
14 August 2022

Russ Allbery: Review: Still Not Safe

Review: Still Not Safe, by Robert L. Wears & Kathleen M. Sutcliffe
Publisher: Oxford University Press
Copyright: November 2019
ISBN: 0-19-027128-0
Format: Kindle
Pages: 232
Still Not Safe is an examination of the recent politics and history of patient safety in medicine. Its conclusions are summarized by the opening paragraph of the preface:
The American moral and social philosopher Eric Hoffer reportedly said that every great cause begins as a movement, becomes a business, and eventually degenerates into a racket. The reform movement to make healthcare safer is clearly a great cause, but patient safety efforts are increasingly following Hoffer's path.
Robert Wears was Professor of Emergency Medicine at the University of Florida specializing in patient safety. Kathleen Sutcliffe is Professor of Medicine and Business at Johns Hopkins. This book is based on research funded by a grant from the Robert Wood Johnson Foundation, for which both Wears and Sutcliffe were principal investigators. (Wears died in 2017, but the acknowledgments imply that at least early drafts of the book existed by that point and it was indeed co-written.)
The anchor of the story of patient safety in Still Not Safe is the 1999 report from the Institute of Medicine entitled To Err is Human, to which the authors attribute an explosion of public scrutiny of medical safety. The headline conclusion of that report, which led nightly news programs after its release, was that 44,000 to 98,000 people died each year in the United States due to medical error. This report prompted government legislation, funding for new safety initiatives, a flurry of follow-on reports, and significant public awareness of medical harm. What it did not produce, in the authors' view, is significant improvements in patient safety.
The central topic of this book is an analysis of why patient safety efforts have had so little measurable effect. The authors attribute this to three primary causes: an unwillingness to involve safety experts from outside medicine or absorb safety lessons from other disciplines, an obsession with human error that led to profound misunderstandings of the nature of safety, and the misuse of safety concerns as a means to centralize control of medical practice in the hands of physician-administrators. (The term used by the authors is "managerial, scientific-bureaucratic medicine," which is technically accurate but rather awkward.)
Biggest complaint first: This book desperately needed examples, case studies, or something to make these ideas concrete. There are essentially none in 230 pages apart from passing mentions of famous cases of medical error that added to public pressure, and a tantalizing but maddeningly nonspecific discussion of the atypically successful effort to radically improve the safety of anesthesia. Apparently anesthesiologists involved safety experts from outside medicine, avoided a focus on human error, turned safety into an engineering problem, and made concrete improvements that had a hugely positive impact on the number of adverse events for patients. Sounds fascinating! Alas, I'm just as much in the dark about what those improvements were as I was when I started reading this book. Apart from a vague mention of some unspecified improvements to anesthesia machines, there are no concrete descriptions whatsoever.
I understand that the authors were probably leery of giving too many specific examples of successful safety initiatives since one of their core points is that safety is a mindset and philosophy rather than a replicable set of actions, and copying the actions of another field without understanding their underlying motivations or context within a larger system is doomed to failure. But you have to give the reader something, or the book starts feeling like a flurry of abstract assertions. Much is made here of the drawbacks of a focus on human error, and the superiority of the safety analysis done in other fields that have moved beyond error-centric analysis (and in some cases have largely discarded the word "error" as inherently unhelpful and ambiguous).
That leads naturally to showing an analysis of an adverse incident through an error lens and then through a more nuanced safety lens, making the differences concrete for the reader. It was maddening to me that the authors never did this. This book was recommended to me as part of a discussion about safety and reliability in tech and the need to learn from safety practices in other fields. In that context, I didn't find it useful, although surprisingly that's because the thinking in medicine (at least as presented by these authors) seems behind the current thinking in distributed systems. The idea that human error is not a useful model for approaching reliability is standard in large tech companies, nearly all of which use blameless postmortems for exactly that reason. Tech, similar to medicine, does have a tendency to be insular and not look outside the field for good ideas, but the approach to large-scale reliability in tech seems to have avoided the other traps discussed here. (Security is another matter, but security is also adversarial, which creates different problems that I suspect require different tools.) What I did find fascinating in this book, although not directly applicable to my own work, is the way in which a focus on human error becomes a justification for bureaucratic control and therefore a concentration of power in a managerial layer. If the assumption is that medical harm is primarily caused by humans making avoidable mistakes, and therefore the solution is to prevent humans from making mistakes through better training, discipline, or process, this creates organizations that are divided into those who make the rules and those who follow the rules. The long-term result is a practice of medicine in which a small number of experts decide the correct treatment for a given problem, and then all other practitioners are expected to precisely follow that treatment plan to avoid "errors." (The best distributed systems approaches may avoid this problem, but this failure mode seems nearly universal in technical support organizations.) I was startled by how accurate that portrayal of medicine felt. My assumption prior to reading this book was that the modern experience of medicine as an assembly line with patients as widgets was caused by the pressure for higher "productivity" and thus shorter visit times, combined with (in the US) the distorting effects of our broken medical insurance system. After reading this book, I've added a misguided way of thinking about medical error and risk avoidance to that analysis. One of the authors' points (which, as usual, I wish they'd made more concrete with a case study) is that the same thought process that lets a doctor make a correct diagnosis and find a working treatment is the thought process that may lead to an incorrect diagnosis or treatment. There is not a separable state of "mental error" that can be eliminated. Decision-making processes are more complicated and more integrated than that. If you try to prevent "errors" by eliminating flexibility, you also eliminate vital tools for successfully treating patients. The authors are careful to point out that the prior state of medicine in which each doctor was a force to themselves and there was no role for patient safety as a discipline was also bad for safety. Reverting to the state of medicine before the advent of the scientific-bureaucratic error-avoiding culture is also not a solution. 
But, rather at odds with other popular books about medicine, the authors are highly critical of safety changes focused on human error prevention, such as mandatory checklists. In their view, this is exactly the sort of attempt to blindly copy the machinery of safety in another field (in this case, air travel) without understanding the underlying purpose and system of which it's a part. I am not qualified to judge the sharp dispute over whether there is solid clinical evidence that checklists are helpful (these authors claim there is not; I know other books make different claims, and I suspect it may depend heavily on how the checklist is used). But I found the authors' argument that one has to design systems holistically for safety, not try to patch in safety later by turning certain tasks into rote processes and humans into machines, to be persuasive. I'm not willing to recommend this book given how devoid it is of concrete examples. I was able to fill in some of that because of prior experience with the literature on site reliability engineering, but a reader who wasn't previously familiar with discussions of safety or reliability may find much of this book too abstract to be comprehensible. But I'm not sorry I read it. I hadn't previously thought about the power dynamics of a focus on error, and I think that will be a valuable observation to keep in mind. Rating: 6 out of 10

2 March 2022

François Marier: Ways to refer to localhost in Chromium

The filter rules preventing websites from portscanning the local machine have recently been tightened in Brave. It turns out there are a surprising number of ways to refer to the local machine in Chromium.
localhost and friends
127.0.0.1 is the first address that comes to mind when thinking of the local machine. localhost is typically aliased to that address (via /etc/hosts), though that convention is not mandatory. The IPv6 equivalent is [::1].
0.0.0.0
0.0.0.0 is not a routable address, but that's what's used to tell a service to bind (listen) on all network interfaces. In Chromium, it resolves to the local machine, just like 127.0.0.1. The IPv6 equivalent is [::].
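To make the binding distinction concrete, here is a minimal sketch (not from the original post; written in Rust, the only programming language that appears elsewhere on this page, with arbitrary port numbers) of a service listening on a single loopback address versus on all interfaces:

    use std::net::TcpListener;

    fn main() -> std::io::Result<()> {
        // Reachable only via 127.0.0.1 (and names that resolve to it).
        let loopback_only = TcpListener::bind("127.0.0.1:8080")?;

        // Reachable via every address of the machine, including every
        // 127.x.y.z loopback alias: this is what "bind on all interfaces" means.
        let all_interfaces = TcpListener::bind("0.0.0.0:8081")?;

        // The IPv6 equivalents of the two cases above.
        let v6_loopback = TcpListener::bind("[::1]:8082")?;
        let v6_all = TcpListener::bind("[::]:8083")?;

        println!(
            "{} {} {} {}",
            loopback_only.local_addr()?,
            all_interfaces.local_addr()?,
            v6_loopback.local_addr()?,
            v6_all.local_addr()?
        );
        Ok(())
    }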

DNS-based
Of course, another way to encode these numerical URLs is to create A / AAAA records for them under a domain you control. I've done this under my personal domain; a small sketch of checking the resolution follows the list below. For these to work, you'll need to:
  • Make sure you can connect to IPv6-only hosts, for example by connecting to an appropriate VPN if needed.
  • Put nameserver 8.8.8.8 in /etc/resolv.conf since you need a DNS server that will not filter these localhost domains. (For example, Unbound will do that if you use private-address: 127.0.0.0/8 in the server config.)
  • Go into chrome://settings/security and disable Always use secure connections to make sure the OS resolver is used.
  • Turn off the chrome://flags/#block-insecure-private-network-requests flag since that security feature (CORS-RFC1918) is designed to protect against these kinds of requests.
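Once those records exist, a quick way to confirm that the OS resolver (and not a filtering one) is answering is a resolution check along these lines; loopback.example.com is a placeholder for whatever name you actually published the A / AAAA records under:

    use std::net::ToSocketAddrs;

    fn main() -> std::io::Result<()> {
        // Placeholder name: substitute the domain you created the records under.
        // The port only satisfies ToSocketAddrs; nothing is contacted.
        let addrs: Vec<_> = ("loopback.example.com", 80).to_socket_addrs()?.collect();
        for addr in addrs {
            // With an unfiltered resolver this should print 127.0.0.1:80 and/or [::1]:80.
            println!("{addr}");
        }
        Ok(())
    }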
127.0.0.0/8 subnet
Technically, the entire 127.0.0.0/8 subnet can be used to refer to the local machine. However, it's not a reliable way to portscan a machine from a web browser because it only catches the services that listen on all interfaces (i.e. 0.0.0.0). For example, on my machine, if I nmap 127.0.0.1, I get:
PORT     STATE SERVICE   VERSION
22/tcp   open  ssh       OpenSSH 8.2p1
25/tcp   open  smtp      Postfix smtpd
whereas if I nmap 127.0.1.25, I only get:
PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 8.2p1
That's because I've got the following in /etc/postfix/main.cf:
inet_interfaces = loopback-only
which I assume is explicitly binding 127.0.0.1. Nevertheless, it would be good to get that fixed in Brave too.
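A rough way to see the same effect outside the browser is to bind one listener to 127.0.0.1 and another to 0.0.0.0, then probe a different loopback alias: on a typical Linux host only the wildcard listener answers. A minimal sketch (my own, with arbitrary ports):

    use std::net::{SocketAddr, TcpListener, TcpStream};
    use std::time::Duration;

    fn main() -> std::io::Result<()> {
        // Keep both listeners alive while we probe.
        let _specific = TcpListener::bind("127.0.0.1:9000")?;
        let _wildcard = TcpListener::bind("0.0.0.0:9001")?;

        let timeout = Duration::from_millis(200);
        for port in [9000u16, 9001] {
            let addr: SocketAddr = format!("127.0.1.25:{port}").parse().unwrap();
            let state = match TcpStream::connect_timeout(&addr, timeout) {
                Ok(_) => "open",    // only the 0.0.0.0 listener shows up here
                Err(_) => "closed", // the 127.0.0.1 listener is invisible on this alias
            };
            println!("127.0.1.25:{port} -> {state}");
        }
        Ok(())
    }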

11 December 2021

Neil Williams: Diversity and gender

As a follow-on to a previous blog entry of mine, Free and Open, I feel it worthwhile to do my bit to dismantle the pseudo-science and oversimplification in the idea that gender is binary at a biological level.
TL;DR: Science simply does not support binary sexes or binary genders. Truth is a bit more complicated.
There is certainty and there are binary answers in mathematics. Things get less definitive in physics, certainly as soon as quantum is broached. Processes become more of an equilibrium between states in chemistry, never wholly one or the other. Yes, there is the oddity of absolute zero but no experiment has yet achieved that fully. It is accurate to describe physics as a development of applied mathematics and to view chemistry as applied physics. Biology, at the biochemical level, is applied chemistry. The sciences build on each other, "on the shoulders of giants", but at each level, some certainty is lost, some amount of uncertainty is expanded and measurements become probabilities, proportions and percentages. Biology is dependent on biochemistry - chemistry is how a biological change results in a different organism. Physics is how that chemical change occurs - temperature, pressure and physical states are inherent to all chemical changes. Outside laboratory constraints, few chemical reactions, especially in organic chemistry, produce one and only one result from two or more known reagents. In biology, everyone is familiar with genetic mutations but a genetic mutation only happens because a biochemical reaction (hydrogen bonding of nucleobases) does not always produce the expected result. Every cell division, every viral infection, there is a finite probability that a change will occur. It might be a small number but it is never zero and can never be dismissed. This is obvious in the current Covid pandemic - genetic mutations result in new variants. Some variants are inviable, some variants produce no net change in the way that the viral particles infect adjacent cells. Sometimes, a mutation happens that changes everything. These mutations are not mistakes - these are simply changes with undetermined outcomes. Genetic changes are the foundation of biodiversity and variety is what allows lifeforms of all kinds to survive changes in environmental factors and/or changes in prevalent diseases. It is precisely the same in humans, particularly in one of the principle spheres of human life that involves replicating genetic material - the creation of gametes for sexual reproduction. Every single time any DNA is copied, there is a finite chance that a different base will be put in place compared to the original. Copying genetic material is therefore non-binary. Given precisely the same initial conditions, the result is not always predictable and the range of how the results vary from one to another increases with every iteration. Let me stress that - at the molecular level, no genetic operation in any biological lifeform has a truly binary result. Repeat that operation sufficiently often and an unexpected result WILL inevitably occur. It is a mathematical certainty that genetic changes will arise by attempting precisely the same genetic operation enough times. Genetic changes are fundamental to how lifeforms survive changing conditions. Life would likely have died out a long time ago on this planet if every genetic operation was perfect. Diversity is life. Similarity leads to extinction. Viral load is interesting at this point. Someone can be infected with a virus, including coronavirus, by encountering a small number of viral particles. Some viruses, it may be a few hundred, some viruses may need a few thousand particles to infect a vulnerable host. 
But here's the thing, for that host to be at risk of infecting another host, the virus needs the host to produce billions upon billions of copies of the virus by taking over the genetic machinery within a huge number of cells in the host. This, as is accepted with Covid, is before the virus has been copied enough times to produce symptoms in the host. Before those symptoms become serious, billions more copies will be made. The numbers become unimaginable - and that is within a single host, let alone the 265 million (and counting) hosts in the current Covid19 pandemic. It's also no wonder that viral infections cause tiredness, the infection is diverting huge resources to propagating itself - before even considering the activity of the immune system. It is idiocy of the highest order to expect all those copies to be identical. The rise of variants is inevitable - indeed essential - in all spheres of biology. A single viral particle is absolutely no threat of any kind - it must first get inside and then copy the genetic information in a host cell. This is where the complexity lies in the definition of life itself. A virus can be considered a lifeform but it is only able to reproduce using another, more complex, lifeform. In truth, a viral particle does not and cannot mutate. The infected host mutates the virus. The longer it takes that host to clear the infection, the more mutations that host will create and then potentially spread to others. Now apply this to the creation of gametes in humans. With seven billion humans, the amount of copying of genetic material is not as large as the pandemic but it is still easy for everyone to understand that children do not merely combine the DNA of both parents. Changes happen. Human sexual reproduction is not as simple as 1 + 1 = 2. Sometimes, the copying of the genetic material produces an unexpected result. Sexual reproduction itself is non-binary. Sexual reproduction is not easy or simple for lifeforms to adopt - the diversity which results from the non-binary operations are exactly why so many lifeforms invest so much energy in reproducing in this way. Whilst many genetic changes in humans will be benign or beneficial, I d like to take an example of a genetic disorder that results from the non-binary nature of sex. Humans can be born with the XY phenotype - i.e. at a genetic level, the individual has the same combination of chromosomes as another XY individual but there are changes within the genes in those chromosomes. We accept this, some children of blonde parents do not have blonde hair, etc. There are also genetic changes where an XY phenotype is not binary. Some people, who at a genetic level would be almost identical to another person who is genetically male, have a genetic mutation which makes it impossible for the cells of that individual to respond to androgens (testosterone). (See Androgen insensitivity syndrome). Genetically, that individual has an X and a Y chromosome, just like many other individuals. However, due to a change in how the genes on those chromosomes were copied, that individual is biologically incapable of constructing the secondary sexual characteristics of a male. At a genetic level, the individual has the XY phenotype of a male. At the physical level, the individual has all the sexual characteristics of a female and none of the sexual characteristics of a male. The gender of that individual is not binary. 
Treatment is centred on supporting the individual and minimising some risks from the inactive genes on the Y chromosome. Human sexual reproduction is non-binary. The results of any sexual reproduction in humans will not always produce the binary option of male or female. It is a lie to claim that human gender is binary. The science is in plain view and cannot be ignored. Identifying as non-binary is not a "cop out" - it can be a biological, genetic, scientific fact. Human sexuality and gender are malleable. Where genetic changes result in symptoms, these can be ameliorated by treatment with human sex hormones, like oestrogen and testosterone. There are valid medical uses for anabolic steroids and hormone replacement therapies to help individuals who, at a genetic level, have non-binary gender. These treatments can help align the physical outer signs with the personality and identity of the individual, whether with or without surgery. It is unacceptable to abandon such people to suffer life long discrimination and harassment by imposing a binary definition that has no basis in science. When a human being has an XY phenotype, that human being is not necessarily male. That individual will be on a spectrum from female (left unaffected by sex hormones in the womb, the foetus will be female, even with an X and a Y chromosome), to various degrees of male. So, at a genetic, biological level, it is a scientific fact that human beings do not have binary gender. There is no evidence that this is new to the modern era, there is no scientific basis for thinking that copying of genetic material was somehow perfectly reliable in earlier history, or that such mutations are specific to homo sapiens. Changes in genetic material provide the diversity to fight infections and adapt to changing environmental factors. Species have and will continue to go extinct if this diversity is absent. With that out of the way, it is no longer a stretch to encompass other aspects of human non-binary genders beyond the known genetic syndromes based on changes in the XY phenotype. Science has not uncovered all of the ways that genes affect personality, behaviour, or identity. How other, less studied, genetic changes affect the much more subtle human facets, especially anything to do with consciousness, identity, personality, sexuality and behaviour, is guesswork. All of these facets can and likely are being affected by genetic factors as well as environmental factors in an endless range of permutations. Personality traits are a beautiful and largely unknowable blend of genes and environment. Genetic information has a finite probability of changes at each and every iteration. Environmental factors are more akin to chaos theory. The idea that the results will fit into binary constructs is laughable. Human society puts huge emphasis on societal norms. Individuals who do not fit into those norms suffer discrimination. The norms themselves have evolved over time as a response to various influences on human civilisation but most are not based on science. It is up to all humans in that society to call out discrimination, to call for changes in the accepted norms and support those who are marginalised. It is a precarious balance, one that humans rarely get right, but it must be based on an acceptance that variation is the natural state. Artificial constraints, like binary genders, must be dismantled because human beings and human sexual reproduction are not binary. To those who think, "well it is for 99%", think again about Covid. 
99% (or closer to 98%) of infected humans recover without notable after effects. That has still crippled the nations of the globe and humbled all those who tried to deny it. Five million human beings are dead because "most infected people recover". Just because something only affects a proportion of human beings does not invalidate the suffering of those humans and the discrimination that those humans will face. Societal norms are not necessarily correct. Religious and other influences typically obscure and ignore scientific fact and undermine human kindness. The scientific truth of life on this planet is that gender is not binary. The more complex the lifeform, the more factors will affect where on the spectrum any one individual will appear. Just because we do not yet fully understand how genes affect human personality and sexuality, does not invalidate the science that variation is the natural order. My previous blog about diversity is not just about male vs female, one nationality vs another, one ethnicity compared to another. Diversity is diverse. Diversity requires accepting that every facet of humanity is subject to variation. That leads to tension at times, it is inevitable. Tension against societal norms, tension against discrimination, tension around those individuals who would abuse the tolerance of others for their own gratification or from their own ignorance. None of us are perfect, none of us have any of this fully sorted and all of us will make mistakes. Personally, I try to respect those around me. I will use whatever pronouns and other conventions that the person requests, from their perspective and not mine. To do otherwise is to deny the natural order and to deny the science. Celebrate all diversity, it is the very stuff of life. The discussions around (typically female) bathroom facilities often miss the point. The concern is not about individuals who describe themselves as non-binary. The concern is about individuals who are fully certain of their own sexuality and who act as sexual predators for their own gratification. These people are acting out a lie for their own ends. The problem people are the predators, so stop blaming the victims who are just as at risk as anyone else who identifies as female. Maybe the best people to spot such predators are those who are non-binary, who have had to pretend to fit into societal norms. Just as travel can be a good antidote to racism, openness and discussion can be a tool to undermine the lies of sexual predators and reassure those who are justifiably fearful. There can never be a biological binary test of gender, there can never be any scientific justification for binary division of facilities. Humanity itself is not binary, even life itself has blurry borders around comas, suspended animation and locked-in syndrome. Legal definitions of human death vary around the world. The only common thread I have ever found is: Be kind to each other. If you find anything above objectionable, then I can only suggest that you reconsider the science and learn to be kind to your fellow humans. None of us are getting out of this alive. I Think You ll Find It s a Bit More Complicated Than That - Ben Goldacre ISBN 978-0-00-750514-2 https://www.amazon.co.uk/dp/B00HATQA8K/ https://en.wikipedia.org/wiki/Androgen_insensitivity_syndrome https://www.bbc.co.uk/news/world-51235105 https://en.wikipedia.org/wiki/Nucleobase My degree is in pharmaceutical sciences and I practised community and hospital pharmacy for 20 years before moving into programming. 
I have direct experience of supporting people who were prescribed hormones to transition their physical characteristics to match their personal identity. I had a Christian upbringing but my work showed me that those religious norms were incompatible with being kind to others, so I rejected religion and I now consider myself a secular humanist.

22 September 2021

Ian Jackson: Tricky compatibility issue - Rust's io::ErrorKind

This post is about some changes recently made to Rust's ErrorKind, which aims to categorise OS errors in a portable way.
Background and context
Error handling principles
Handling different errors differently is often important (although, sadly, often neglected). For example, if a program tries to read its default configuration file, and gets a "file not found" error, it can proceed with its default configuration, knowing that the user hasn't provided a specific config. If it gets some other error, it should probably complain and quit, printing the message from the error (and the filename). Otherwise, if the network fileserver is down (say), the program might erroneously run with the default configuration and do something entirely wrong.
Rust's portability aims
The Rust programming language tries to make it straightforward to write portable code. Portable error handling is always a bit tricky. One of Rust's facilities in this area is std::io::ErrorKind which is an enum which tries to categorise (and, sometimes, enumerate) OS errors. The idea is that a program can check the error kind, and handle the error accordingly. That these ErrorKinds are part of the Rust standard library means that to get this right, you don't need to delve down and get the actual underlying operating system error number, and write separate code for each platform you want to support. You can check whether the error is ErrorKind::NotFound (or whatever). Because ErrorKind is so important in many Rust APIs, some code which isn't really doing an OS call can still have to provide an ErrorKind. For this purpose, Rust provides a special category ErrorKind::Other, which doesn't correspond to any particular OS error.
Rust's stability aims and approach
Another thing Rust tries to do is keep existing code working. More specifically, Rust tries to:
  1. Avoid making changes which would contradict the previously-published documentation of Rust's language and features.
  2. Tell you if you accidentally rely on properties which are not part of the published documentation.
By and large, this has been very successful. It means that if you write code now, and it compiles and runs cleanly, it is quite likely that it will continue to work properly in the future, even as the language and ecosystem evolve. This blog post is about a case where Rust failed to do (2), above, and, sadly, it turned out that several people had accidentally relied on something the Rust project definitely intended to change. Furthermore, it was something which needed to change. And the new (corrected) way of using the API is not so obvious.
Rust enums, as relevant to io::ErrorKind
(Very briefly:) When you have a value which is an io::ErrorKind, you can compare it with specific values:
    if error.kind() == ErrorKind::NotFound { ... }
  
But in Rust it's more usual to write something like this (which you can read like a switch statement):
    match error.kind() {
      ErrorKind::NotFound => use_default_configuration(),
      _ => panic!("could not read config file {}: {}", &file, &error),
    }
Here _ means "anything else". Rust insists that match statements are exhaustive, meaning that each one covers all the possibilities. So if you left out the line with the _, it wouldn't compile. Rust enums can also be marked non_exhaustive, which is a declaration by the API designer that they plan to add more kinds. This has been done for ErrorKind, so the _ is mandatory, even if you write out all the possibilities that exist right now: this ensures that if new ErrorKinds appear, they won't stop your code compiling.
Improving the error categorisation
The set of error categories stabilised in Rust 1.0 was too small. It missed many important kinds of error. This makes writing error-handling code awkward. In any case, we expect to add new error categories occasionally. I set about trying to improve this by proposing new ErrorKinds. This obviously needed considerable community review, which is why it took about 9 months.
The trouble with Other and tests
Rust has to assign an ErrorKind to every OS error, even ones it doesn't really know about. Until recently, it mapped all errors it didn't understand to ErrorKind::Other - reusing the category for "not an OS error at all". Serious people who write serious code like to have serious tests. In particular, testing error conditions is really important. For example, you might want to test your program's handling of disk full, to make sure it didn't crash, or corrupt files. You would set up some contraption that would simulate a full disk. And then, in your tests, you might check that the error was correct. But until very recently (still now, in Stable Rust), there was no ErrorKind::StorageFull. You would get ErrorKind::Other. If you were diligent you would dig out the OS error code (and check for ENOSPC on Unix, corresponding Windows errors, etc.). But that's tiresome. The more obvious thing to do is to check that the kind is Other. Obvious but wrong. ErrorKind is non_exhaustive, implying that more error kinds will appear, and, naturally, these would more finely categorise previously-Other OS errors. Unfortunately, the documentation note
Errors that are Other now may move to a different or a new ErrorKind variant in the future.
was only added in May 2020. So the wrongness of the "obvious" approach was, itself, not very obvious. And even with that docs note, there was no compiler warning or anything. The unfortunate result is that there is a body of code out there in the world which might break any time an error that was previously Other becomes properly categorised. Furthermore, there was nothing stopping new people writing new obvious-but-wrong code.
Chosen solution: Uncategorized
The Rust developers wanted an engineered safeguard against the bug of assuming that a particular error shows up as Other. They chose the following solution: There is now a new ErrorKind::Uncategorized which is used for all OS errors for which there isn't a more specific categorisation. The fallback translation of unknown errors was changed from Other to Uncategorized. This is de jure justified by the fact that this enum has always been marked non_exhaustive. But in practice, because this bug wasn't previously detected, there is such code in the wild. That code now breaks (usually, in the form of failing test cases). Usually when Rust starts to detect a particular programming error, it is reported as a new warning, which doesn't break anything. But that's not possible here, because this is a behavioural change. The new ErrorKind::Uncategorized is marked unstable. This makes it impossible to write code on Stable Rust which insists that an error comes out as Uncategorized. So, one cannot now write code that will break when new ErrorKinds are added. That's the intended effect. The downside is that this does break old code, and, worse, it is not as clear as it should be what the fixed code looks like.
Alternatives considered and rejected by the Rust developers
Not adding more ErrorKinds: This was not tenable. The existing set is already too small, and error categorisation is in any case expected to improve over time.
Just adding ErrorKinds as had been done before: This would mean occasionally breaking test cases (or, possibly, production code) when an error that was previously Other becomes categorised. The broken code would have been "obvious", but de jure wrong, just as it is now, so this option amounts to expecting this broken code to continue to be written and continuing to break it occasionally.
Somehow using Rust's Edition system: The Rust language has a system to allow language evolution, where code declares its Edition (2015, 2018, 2021). Code from multiple editions can be combined, so that the ecosystem can upgrade gradually. It's not clear how this could be used for ErrorKind, though. Errors have to be passed between code with different editions. If those different editions had different categorisations, the resulting programs would have incoherent and broken error handling. Also some of the schemes for making this change would mean that new ErrorKinds could only be stabilised about once every 3 years, which is far too slow.
How to fix code broken by this change
Most main-line error handling code already has a fallback case for unknown errors. Simply replacing any occurrence of Other with _ is right.
How to fix thorough tests
The tricky problem is tests. Typically, a thorough test case wants to check that the error is "precisely as expected" (as far as the test can tell). Now that unknown errors come out as an unstable Uncategorized variant that's not so easy. If the test is expecting an error that is currently not categorised, you want to write code that says "if the error is any of the recognised kinds, call it a test failure".
What does "any of the recognised kinds" mean here ? It doesn't meany any of the kinds recognised by the version of the Rust stdlib that is actually in use. That set might get bigger. When the test is compiled and run later, perhaps years later, the error in this test case might indeed be categorised. What you actually mean is "the error must not be any of the kinds which existed when the test was written". IMO therefore the right solution for such a test case is to cut and paste the current list of stable ErrorKinds into your code. This will seem wrong at first glance, because the list in your code and in Rust can get out of step. But when they do get out of step you want your version, not the stdlib's. So freezing the list at a point in time is precisely right. You probably only want to maintain one copy of this list, so put it somewhere central in your codebase's test support machinery. Periodically, you can update the list deliberately - and fix any resulting test failures. Unfortunately this approach is not suggested by the documentation. In theory you could work all this out yourself from first principles, given even the situation prior to May 2020, but it seems unlikely that many people have done so. In particular, cutting and pasting the list of recognised errors would seem very unnatural. Conclusions This was not an easy problem to solve well. I think Rust has done a plausible job given the various constraints, and the result is technically good. It is a shame that this change to make the error handling stability more correct caused the most trouble for the most careful people who write the most thorough tests. I also think the docs could be improved.
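To make the frozen-list idea concrete, here is a minimal sketch (my own, not from the post) of such a test helper; the match arms are an abbreviated snapshot and should be replaced by a full copy of the ErrorKinds that are stable at the time you write it:

    use std::io::{Error, ErrorKind};

    // Frozen snapshot of the ErrorKinds known when the test was written
    // (abbreviated here); kept in one central place in the test support code
    // and only updated deliberately.
    fn is_recognised_kind(kind: ErrorKind) -> bool {
        match kind {
            ErrorKind::NotFound
            | ErrorKind::PermissionDenied
            | ErrorKind::ConnectionRefused
            | ErrorKind::ConnectionReset
            | ErrorKind::AlreadyExists
            | ErrorKind::WouldBlock
            | ErrorKind::InvalidInput
            | ErrorKind::InvalidData
            | ErrorKind::TimedOut
            | ErrorKind::WriteZero
            | ErrorKind::Interrupted
            | ErrorKind::UnexpectedEof => true,
            // Everything else - Other, the unstable Uncategorized, and any kind
            // stabilised after this snapshot was taken - counts as not yet
            // categorised for the purposes of the test.
            _ => false,
        }
    }

    fn main() {
        // ENOSPC (28 on Linux) had no specific ErrorKind when this was written,
        // so a thorough "disk full" test would assert that it is NOT one of the
        // recognised kinds; the assertion keeps holding even after the stdlib
        // grows a more specific kind, until the frozen list is updated on purpose.
        let err = Error::from_raw_os_error(28);
        assert!(!is_recognised_kind(err.kind()));
        println!("disk-full error currently reported as {:?}", err.kind());
    }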
edited shortly after posting, and again 2021-09-22 16:11 UTC, to fix HTML slips



1 September 2021

Holger Levsen: 20210901-Debian-Reunion-Hamburg-2021

Debian Reunion Hamburg 2021 Moin! I'm glad to finally be able to send out this invitation for the "Debian Reunion Hamburg 2021" taking place at the venue of the 2018 & 2019 MiniDebConfs! The event will run from Monday, Sep 27 2021 until Friday, Oct 1 2021, with Sunday, Sep 26 2021 as arrival day. IOW, Debian people meet again in Hamburg. The exact format is less defined and structured than in previous years; probably we will just be hacking from Monday to Wednesday, have talks on Thursday and a nice day trip on Friday. Please read https://wiki.debian.org/DebianEvents/de/2021/DebianReunionHamburg if you intend to attend. Probably having some video coverage would be very nice to have, though due to this very late announcement I'm not sure we'll really have talks and the need for video. The event is in 3.5 weeks and will take place, either as a very small hack meeting, or somewhat bigger. We certainly want videoing if we have talks - and if you could help with this that would be very great! Last and definitely not least, financial sponsors for the event would be great. If you can support the "Debian Reunion Hamburg 2021", please contact me directly! Now, late, after weeks of wondering if and how to do this event, I'm finally and very much looking forward to it, to meet some Debian folks at least & for some shared Debian hacking. Definitely not the 2021 event I had in mind after the 2019 one, but something I feel I can responsibly do & enjoy. So, hoping to see some of you soon & most of you later! ;-) Sad but true, and at least something for some people. We should all do more local events. And more online events too, e.g. I think this is a great idea too: https://wiki.debian.org/DebianEvents/internet/2021/MiniDebConfOnlineBookworm See you!

13 July 2021

Debian XMPP Team: XMPP Novelties in Debian 11 Bullseye

This is not only the Year of the Ox, but also the year of Debian 11, code-named bullseye. The release lies ahead, and the full freeze starts this week. A good opportunity to take a look at what is new in bullseye. This post presents new programs and new software versions related to XMPP, also known as Jabber. XMPP has existed since 1999, and has a diverse and active developer community. It is a universal communication protocol, used for instant messaging, IoT, WebRTC, and social applications. You will probably encounter some oxen in this post. That's all for now. Enjoy Debian 11 bullseye and Happy Chatting!

20 June 2021

Russ Allbery: Review: The Magician's Nephew

Review: The Magician's Nephew, by C.S. Lewis
Illustrator: Pauline Baynes
Series: Chronicles of Narnia #6
Publisher: Collier Books
Copyright: 1955
Printing: 1978
ISBN: 0-02-044230-0
Format: Mass market
Pages: 186
The Magician's Nephew is the sixth book of the Chronicles of Narnia in the original publication order, but it's a prequel, set fifty years before The Lion, the Witch and the Wardrobe. It's therefore put first in the new reading order. I have always loved world-building and continuities and, as a comics book reader (Marvel primarily), developed a deep enjoyment of filling in the pieces and reconstructing histories from later stories. It's no wonder that I love reading The Magician's Nephew after The Lion, the Witch and the Wardrobe. The experience of fleshing out backstory with detail and specifics makes me happy. If that's also you, I recommend the order in which I'm reading these books. Reading this one first is defensible, though. One of the strongest arguments for doing so is that it's a much stronger, tighter, and better-told story than The Lion, the Witch and the Wardrobe, and therefore might start the series off on a better foot for you. It stands alone well; you don't need to know any of the later events to enjoy this, although you will miss the significance of a few things like the lamp post and you don't get the full introduction to Aslan. The Magician's Nephew is the story of Polly Plummer, her new neighbor Digory Kirke, and his Uncle Andrew, who fancies himself a magician. At the start of the book, Digory's mother is bed-ridden and dying and Digory is miserable, which is the impetus for a friendship with Polly. The two decide to explore the crawl space of the row houses in which they live, seeing if they can get into the empty house past Digory's. They don't calculate the distances correctly and end up in Uncle Andrew's workroom, where Digory was forbidden to go. Uncle Andrew sees this as a golden opportunity to use them for an experiment in travel to other worlds. MAJOR SPOILERS BELOW. The Magician's Nephew, like the best of the Narnia books, does not drag its feet getting started. It takes a mere 30 pages to introduce all of the characters, establish a friendship, introduce us to a villain, and get both of the kids into another world. When Lewis is at his best, he has an economy of storytelling and a grasp of pacing that I wish was more common. It's also stuffed to the brim with ideas, one of the best of which is the Wood Between the Worlds. Uncle Andrew has crafted pairs of magic rings, yellow and green, and tricks Polly into touching one of the yellow ones, causing her to vanish from our world. He then uses her plight to coerce Digory into going after her, carrying two green rings that he thinks will bring people back into our world, and not incidentally also observing that world and returning to tell Uncle Andrew what it's like. But the world is more complicated than he thinks it is, and the place where the children find themselves is an eerie and incredibly peaceful wood, full of grass and trees but apparently no other living thing, and sprinkled with pools of water. This was my first encounter with the idea of a world that connects other worlds, and it remains the most memorable one for me. I love everything about the Wood: the simplicity of it, the calm that seems in part to be a defense against intrusion, the hidden danger that one might lose one's way and confuse the ponds for each other, and even the way that it tends to make one lose track of why one is there or what one is trying to accomplish. That quiet forest filled with pools is still an image I use for infinite creativity and potential. 
It's quiet and nonthreatening, but not entirely inviting either; it's magnificently neutral, letting each person bring what they wish to it. One of the minor plot points of this book is that Uncle Andrew is wrong about the rings because he's wrong about the worlds. There aren't just two worlds; there are an infinite number, with the Wood as a nexus, and our reality is neither the center nor one of an important pair. The rings are directional, but relative to the Wood, not our world. The kids, who are forced to experiment and who have open minds, figure this out quickly, but Uncle Andrew never shifts his perspective. This isn't important to the story, but I've always thought it was a nice touch of world-building. Where this story is heading, of course, is the creation of Narnia and the beginning of all of the stories told in the rest of the series. But before that, the kids's first trip out of the Wood is to one of the best worlds of children's fantasy: Charn. If the Wood is my mental image of a world nexus, Charn will forever be my image of a dying world: black sky, swollen red sun, and endless abandoned and crumbling buildings as far as the eye can see, full of tired silences and eerie noises. And, of course, the hall of statues, with one of the most memorable descriptions of history and empire I've ever read (if you ignore the racialized description):
All of the faces they could see were certainly nice. Both the men and women looked kind and wise, and they seemed to come of a handsome race. But after the children had gone a few steps down the room they came to faces that looked a little different. These were very solemn faces. You felt you would have to mind your P's and Q's, if you ever met living people who looked like that. When they had gone a little farther, they found themselves among faces they didn't like: this was about the middle of the room. The faces here looked very strong and proud and happy, but they looked cruel. A little further on, they looked crueller. Further on again, they were still cruel but they no longer looked happy. They were even despairing faces: as if the people they belonged to had done dreadful things and also suffered dreadful things.
The last statue is of a fierce, proud woman that Digory finds strikingly beautiful. (Lewis notes in an aside that Polly always said she never found anything specially beautiful about her. Here, as in The Silver Chair, the girl is the sensible one and things would have gone better if the boy had listened to her, a theme that I find immensely frustrating because Susan was the sensible one in the first two books of the series but then Lewis threw that away.) There is a bell in the middle of this hall, and the pillar that holds that bell has an inscription on it that I think every kid who grew up on Narnia knows by heart.
Make your choice, adventurous Stranger;
Strike the bell and bide the danger,
Or wonder, till it drives you mad,
What would have followed if you had.
Polly has no intention of striking the bell, but Digory fights her and does it anyway, waking Jadis from where she sat as the final statue in the hall and setting off one of the greatest reimaginings of a villain in children's literature. Jadis will, of course, become the White Witch who holds Narnia in endless winter some thousand Narnian years later. But the White Witch was a mediocre villain at best, the sort of obvious and cruel villain common in short fairy tales where the author isn't interested in doing much characterization. She exists to be evil, do bad things, and be defeated. She has a few good moments in conflict with Aslan, but that's about it. Jadis in this book is another matter entirely: proud, brilliant, dangerous, and creative. The death of everything on Charn was Jadis's doing: an intentional spell, used to claim a victory of sorts from the jaws of defeat by her sister in a civil war. (I find it fascinating that Lewis puts aside his normally sexist roles here.) Despite the best attempts of the kids to lose her both in Charn and in the Wood (which is inimical to her, in another nice bit of world-building), she manages to get back to England with them. The result is a remarkably good bit of villain characterization. Jadis is totally out of her element, used to a world-spanning empire run with magic and (from what hints we get) vaguely medieval technology. Her plan to take over their local country and eventually the world should be absurd and is played somewhat for laughs. Her magic, which is her great weapon, doesn't even work in England. But Jadis learns at a speed that the reader can watch. She's observant, she pays attention to things that don't fit her expectations, she changes plans, and she moves with predatory speed. Within a few hours in London she's stolen jewels and a horse and carriage, and the local police seem entirely overmatched. There's no way that one person without magic should be a real danger to England around the turn of the 20th century, but by the time the kids manage to pull her back into the Wood, you're not entirely sure England would have been safe. A chaotic confrontation, plus the ability of the rings to work their magic through transitive human contact, ends up with the kids, Uncle Andrew, Jadis, a taxicab driver and his horse all transported through the Wood to a new world. In this case, literally a new world: Narnia at the point of its creation. Here again, Lewis translates Christian myth, in this case the Genesis creation story, into a more vivid and in many ways more beautiful story than the original. Aslan singing the world into existence is an incredible image, as is the newly-created world so bursting with life that even things that normally could not grow will do so. (Which, of course, is why there is a lamp post burning in the middle of the western forest of Narnia for the Pevensie kids to find later.) I think my favorite part is the creation of the stars, but the whole sequence is great. There's also an insightful bit of human psychology. Uncle Andrew can't believe that a lion is singing, so he convinces himself that Aslan is not singing, and thus prevents himself from making any sense of the talking animals later.
Now the trouble about trying to make yourself stupider than you really are is that you very often succeed.
As with a lot in Lewis, he probably meant this as a statement about faith, but it generalizes well beyond the religious context. What disappointed me about the creation story, though, is the animals. I didn't notice this as a kid, but this re-read has sensitized me to how Lewis consistently treats the talking animals as less than humans even though he celebrates them. That happens here too: the newly-created, newly-awakened animals are curious and excited but kind of dim. Some of this is an attempt to show that they're young and are just starting to learn, but it also seems to be an excuse for Aslan to set up a human king and queen over them instead of teaching them directly how to deal with the threat of Jadis who the children inadvertently introduced into the world. The other thing I dislike about The Magician's Nephew is that the climax is unnecessarily cruel. Once Digory realizes the properties of the newly-created world, he hopes to find a way to use that to heal his mother. Aslan points out that he is responsible for Jadis entering the world and instead sends him on a mission to obtain a fruit that, when planted, will ward Narnia against her for many years. The same fruit would heal his mother, and he has to choose Narnia over her. (It's a fairly explicit parallel to the Garden of Eden, except in this case Digory passes.) Aslan, in the end, gives Digory the fruit of the tree that grows, which is still sufficient to heal his mother, but this sequence made me angry when re-reading it. Aslan knew all along that what Digory is doing will let him heal his mother as well, but hides this from him to make it more of a test. It's cruel and mean; Aslan could have promised to heal Digory's mother and then seen if he would help Narnia without getting anything in return other than atoning for his error, but I suppose that was too transactional for Lewis's theology or something. Meh. But, despite that, the only reason why this is not the best Narnia book is because The Voyage of the Dawn Treader is the only Narnia book that also nails the ending. The Magician's Nephew, up through Charn, Jadis's rampage through London, and the initial creation of Narnia, is fully as good, perhaps better. It sags a bit at the end, partly because it tries to hard to make the Narnian animals humorous and partly because of the unnecessary emotional torture of Digory. But this still holds up as the second-best Narnia book, and one I thoroughly enjoyed re-reading. If anything, Jadis and Charn are even better than I remembered. Followed by the last book of the series, the somewhat notorious The Last Battle. Rating: 9 out of 10

5 June 2021

Utkarsh Gupta: FOSS Activities in May 2021

Here's my (twentieth) monthly update about the activities I've done in the F/L/OSS world.

Debian
This was my 29th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas '19! \o/ Interesting month, surprisingly. Lots of things happening and lots of moving parts; becoming "the new normal", I believe. Anyhow, working on Ubuntu full-time has its own advantages, and one of them is being able to work on Debian stuff! So whilst I couldn't upload a lot of packages because of the freeze, here's what I worked on:

Uploads and bug fixes:

Other $things:
  • Mentoring for newcomers and assisting people in BSP.
  • Moderation of -project mailing list.

Ubuntu
This was my 4th month of actively contributing to Ubuntu. Now that I've joined Canonical to work on Ubuntu full-time, there's a bunch of things I do! \o/ This month, by all means, was dedicated mostly to PHP 8.0, transitioning from PHP 7.4 to 8.0. Naturally, it had so many moving parts and moments of utmost frustration, shared w/ Bryce. :D So even though I can't upload anything, I worked on the following stuff & asked for sponsorship.
But before that, I'd like to take a moment to stress how kind and awesome Gianfranco Costamagna, a.k.a. LocutusOfBorg, is! He's been sponsoring a bunch of my things & helping with re-triggers, et al. Thanks a bunch, Gianfranco; beers on me whenever we meet!

Merges:

Uploads & Syncs:

MIRs:

Seed Operations:

Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success. And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support). This was my twentieth month as a Debian LTS and eleventh month as a Debian ELTS paid contributor.
I was assigned 29.75 hours for LTS and 40.00 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:
  • Front-desk duty from 24-05 until 30-05 for both LTS and ELTS.
  • Triaged rails, libimage-exiftool-perl, hivex, graphviz, glibc, libexosip2, impacket, node-ws, thunar, libgrss, nginx, postgresql-9.6, ffmpeg, composer, and curl.
  • Mark CVE-2019-9904/graphviz as ignored for stretch and jessie.
  • Mark CVE-2021-32029/postgresql-9.6 as not-affected for stretch.
  • Mark CVE-2020-24020/ffmpeg as not-affected for stretch.
  • Mark CVE-2020-22020/ffmpeg as postponed for stretch.
  • Mark CVE-2020-22015/ffmpeg as ignored for stretch.
  • Mark CVE-2020-21041/ffmpeg as postponed for stretch.
  • Mark CVE-2021-33574/glibc as no-dsa for stretch & jessie.
  • Mark CVE-2021-31800/impacket as no-dsa for stretch.
  • Mark CVE-2021-32611/libexosip2 as no-dsa for stretch.
  • Mark CVE-2016-20011/libgrss as ignored for stretch.
  • Mark CVE-2021-32640/node-ws as no-dsa for stretch.
  • Mark CVE-2021-32563/thunar as no-dsa for stretch.
  • [LTS] Help test and review bind9 update for Emilio.
  • [LTS] Suggest and add DEP8 tests for bind9 for stretch.
  • [LTS] Sponsored upload of htmldoc to buster for Havard as a consequence of #988289.
  • [ELTS] Fix triage order for jetty and graphviz.
  • [ELTS] Raise issue upstream about cloud-init; mock tests instead.
  • [ELTS] Write to private ELTS list about triage ordering.
  • [ELTS] Review Emilio's new script and write back feedback, mentioning extra file created, et al.
  • [ELTS/LTS] Raise upgrade problems from LTS -> LTS+1 to the list. Thread here.
    • Further help review and raise problems that could occur, et al.
  • [LTS] Help explain path forward for firmware-nonfree update to Ola. Thread here.
  • [ELTS] Revert entries of TEMP-0000000-16B7E7 and TEMP-0000000-1C4729; CVEs assigned & fix ELTS tracker build.
  • Auto EOL'ed linux, libgrss, node-ws, and inspircd for jessie.
  • Attended monthly Debian LTS meeting, which didn't happen, heh.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • General and other discussions on LTS private and public mailing list.

Until next time.
:wq for today.

25 May 2021

Shirish Agarwal: Pandemic, Toolkit and India

Pandemic Situation in India
I don't know from where I should start. This is probably a good start. I actually would recommend Indiacable as they do attempt to share some things happening in India from day to day, but still there is a lot that they just can't cover, nobody can cover. There were two reports which kind of shook me all inside. One, which sadly came from the UK publication Independent, probably as no Indian publication would dare publish it. The other from Rural India. I have been privileged in many ways, including friends who have asked me if I need any financial help. But seeing reports like the above, these people need more help and guidance than I do. While I'm never one to say give to foundations, if some people do want to help people from Maharashtra, then moneylifefoundation could be a good place where they could donate. FWIW, they usually use the foundation to help savers and investors be safe and to help them get money back when it is taken by companies with dubious intentions. That is their drive. Two articles show their bent. The first one is about the Algo scam, which I have previously written about in this blog. Interestingly, when I talk about this scam, all Modi supporters are silent. The other one does give some idea as to why the Govt. is indifferent. That is going to be a heavy cross for all relatives to bear. There has been a lot that has been happening. Now, instead of being limited to cities, Covid has gone into the hinterland in a big way. One could also ask Praveen, as he probably knows what would be good for Kerala and surrounding areas. The biggest change, however, has been that India is now battling not just the pandemic but also mucormycosis, also known as black fungus, and its deadlier cousin, the white fungus. Mucormycosis came largely due to ill-advised claims that applying cow dung gives protection against Corona. And many applied it due to faith. And people who know science do know that cow dung in fact carries such microbes. Sadly, those of us who are and were more interested in law, computer science, etc. have now also had to keep on top of what is happening in the medical field. It isn't that I hate it, but it has a lot of costs. From what I could gather on various social media and elsewhere, a single injection of the anti-fungal for the above costs INR 3k/-, it needs to be given 5 times a day, and the course has to run for three weeks. So even relatively wealthy people can and will become poor in no time. No wonder thousands of those went to the UK, US, Dubai or wherever they could find safe harbor from the pandemic, with no plans of coming back soon. There was also the whole bit about FBS or Fetal Bovine Serum. India ordered millions of blood serum products from abroad and continues to. This was quickly shut down as news on social media. Apparently, it is only the Indian cow which is worthy of reverence. All other cows and their children are fair game according to those in power. Of course, that discussion was quickly shut down, as was the discussion about the IGP (Indian Genome Project). People over the years had asked me why India never participated in the HGP (Human Genome Project). I actually had no answer for that. Then in 2020, the idea of the IGP was put up, and it was quickly shot down as the results could damage a political party's image. In fact, a note to people who want to join the Indian civil services tells the reason exactly. While many countries in the world are hypocrites, including the U.S.,
none can take the place that India has made for itself in that field.

The Online experience The vaccination process has been made online and has led to severe heartburn and trouble for many, including many memes. For example:

Daily work, get up, have a bath, see if you got a slot on the app, sleep.
People trying desperately to get a slot, taken from the Hindi movie Dilwale Dulhania Le Jayenge.
Just to explain what is happening, one has to go to the website of cowin. Sharing a screenshot of the same.
Cowin app screenshot
I have deliberately taken a screenshot of the cowin app in U.P., which is one of the areas where the ruling party, the BJP, is in power. I haven't taken my own state for the simple reason that even if a slot is open, it is of no use, as there are no vaccines. As has been shared in India Cable as well as in many newspapers, it is the Central Govt. which holds the strings for the vaccines. Maharashtra did put up an international tender, but to no effect. All vaccine manufacturers want only the Central Govt. as purchaser, for multiple reasons. And GOI is saying it has no money, even though recently it got loans as well as a dividend from RBI to the tune of 99k crore. What all that money is for, we have no clue. Coming back, though, to the issue at hand: the cowin app has been given an open API. While normally people like us should be and are happy when an API is open, here it has given those who understand how to use git, compile, etc. an advantage over everyone else. A copy of the public repo showing how you can do the same can be found on Github. Now, obviously, for people like me and many others it has ethical issues.

Kiran's Interview in Times of India (TOI) There isn't much to say, apart from the fact that I haven't used it. I just didn't want to. It just is unethical. Hopefully, in the coming days GOI does something better. That is the only thing we are surviving on: hope.

The Toolkit saga A few days ago, GOI shared a toolkit apparently made by Congress to defame the party in power. That toolkit was shared before the press, and Altnews did the investigation and promptly shredded the claims. Congress promptly filed an FIR in Chhattisgarh, where it is in power. The gentleman who made the claims, Mr. Sambit Patra, refused to appear before the police without evidence, citing personal reasons and asking for a week to appear before them. Apart from Altnews, which did a great job, sadly many people didn't even know that there is something called WYSIWYG. I had to explain that so many industries, whether it is politics, the creative industries, legal, the ad industry, medical transcription, or imaging, all use this, and all the participants use the same version of the software. The reason is that in most industries there is a huge loss and an issue of legal liability if something untoward happens. For example, if medical transcription done in India is wrong (although his or her work will be checked by a superior in the West), but for whatever reason the error is not caught, and a wrong diagnosis is put down (due to a wrong colour or something), then a patient could die and the firm that does that work could face heavy penalties which could be the death of them. There is another myth, that Congress has unlimited or huge wealth. I asked, if that were the case, why didn't they shift to Mac? Of course, none have answers to this one. There is another reason why they didn't want to appear. The Rona Wilson investigation by Arsenal experts has also made them cautious. Previously, they had a free run. Nowadays, software forensic tools are available to one and all. For example, Debian itself has a good variety of tools for the same. I remember Vipin's sharing a few years back. For those who want to start, just install the apps and try figuring things out. Expertise in using the tools takes years, though, as you use them day in, day out. Update 25/05/2021 Apparently, because Twitter marked and showcased a few tweets as 'Manipulated Media', those in Govt. are and were dead against it. So they conducted a raid against the Twitter India headquarters, knowing fully well that there would be nobody there except security. The moment I read this, my mind went to the whole 'fruit of the poisonous tree' legal doctrine. Sadly though, India doesn't recognize it and in fact still follows the pre-colonial-era idea that evidence, however collected, is good. A good explanation of the same can be found here. There are some exceptions to the rule, but they are drawn so finely that more often than not they can't be used in a court of law in India. Although a good RTI was shared by Mr. Saket Gokhale on the same issue, which does raise some interesting points
Twitter India Raid, Saket Gokhale RTI 1
Saket Gokhale RTI query , Twitter India Raid 2
FWIW, Saket has been successful in getting his prayers heard, either as answers to RTI queries or by following them up in the various High Courts of India. Of course, those in the ruling party ridicule him but are unable to find faults in his application of logic. And quite a few times I have learned from his applications, as well as the nuances of whatever is there in law, a judgment or a guideline which he invokes in his prayer. For example, the Lalitha Kumari guidelines which the gentleman has shared in his prayer can be found here. Hence it would now be up to the Delhi Police Cell to prove their case in response to the RTI. He has also trapped them, as he has shared that they can't use the excuses/exemptions which they have tried before. As I had shared earlier, High Courts in India have woken up, whether it is Delhi, Mumbai, Aurangabad, Madhya Pradesh, Uttar Pradesh, Odisha or Kerala. Just today, i.e. on 25th May 2021, Justices Bela Trivedi and Kalra asked how it is that all the hospitals don't have an NOC from the Fire Department. They also questioned the ASG (Assistant Solicitor General) as to how BU (Building Use) certificates have been granted, as almost all of the 400 hospitals are in residential areas. To which the ASG replied that it is the same state of affairs in almost 4,000 schools as well as 6,000-odd factories in Ahmedabad alone, leaving aside the rest of the district and state. And this when strict instructions were passed last year. They chose to do nothing, sadly. I will share a link on this when Bar and Bench gives me one. The Hindu also covered the whole raid-on-Twitter saga.

Conclusion In conclusion, I sincerely do not know where we are headed. The only thing I know is that we cannot expect things to be better before year-end, and maybe not even after that. It all depends on the vaccines and their availability. After that ruralindia article, I had to watch quite a few movies and whatnot just to get it out of my head. And this is apart from the 1,600-odd teachers and workers who have died on U.P. poll duty. What a loss, not just to the family members of the victims, but to a whole generation of school children who will not be able to get quality teaching and will be deprived of education. What their future will be, God only knows. The only good Bollywood movie which I saw was Ramprasad Ki Tehrvi. The movie was an accurate representation of most families in and around me. There was a movie called Sansar (1987) which showed the breakup of the joint family into nuclear families. This movie could very well have been a continuation of the same. Even Marathi movies, which at one time were very progressive, have gone back to the same boy-girl love story routine. Sameer, though released in late 2020, I was able to see only recently. Vakeel Saab was an OK copy of Pink. I loved Sameer as, unlike Salman Khan films, it showed a pretty authentic human struggle of a person who goes to the Middle East without any qualifications, works as a laborer, and the trials he goes through. Somehow, Malayalam movies have a knack for showing truth without much of a budget. Most of the Indian web series didn't make an impact. I think many of them were just going through the motions; it seems everybody is concerned with the well-being of their near and dear ones. There was also this (Trigger Warning: This story discusses organized campaigns glorifying and advocating sexual violence against Muslim women.) Hoping people somehow make it to the other side of the pandemic.

16 May 2021

Carl Chenet: How to save up to 500€/year switching from Mailchimp to Open Source Mailtrain and AWS SES

My newsletter Le Courrier du hacker (3,800 subscribers, 176 issues) is 3 years old and Mailchimp costs were becoming unbearable for a small project ($50 a month, $600 a year), with still limited revenues nowadays. Switching to the Open Source Mailtrain plugged into the AWS Simple Email Service (SES) will dramatically reduce the associated costs. First things first, thanks a lot to Pierre-Gilles Leymarie for his own article about switching to Mailtrain/SES. I owe him (and soon you too) so much. This article will be a step-by-step guide to setting up Mailtrain/SES on a dedicated server running Linux. What's the purpose of this article? Mailchimp gets more and more expensive as your newsletter subscribers grow, and you need to leave it. You can use Mailtrain, a web app running on your own server, together with the AWS SES service to send emails in an efficient way, avoiding being flagged as a spammer by the other SMTP servers (very, very common; you can try, but you have been warned). Prerequisites You will need the following prerequisites: Steps This is a fairly straightforward setup if you know what you're doing. Otherwise, you may need the help of a professional sysadmin. You will need to complete the following steps in order to complete your setup: Configure AWS SES Verify your domain You need to configure DKIM to certify that the emails sent are indeed from your own domain. DKIM is mandatory; it's the de facto standard in the mail industry. Ask to verify your domain
Ask AWS SES to verify a domain
Generate the DKIM settings
Generate the DKIM settings
Use the DKIM settings
Now you have your DKIM settings, and Amazon AWS is waiting to find the corresponding TXT record in your DNS zone. Configure your DNS zone to include DKIM settings I can't be too specific for this section because it varies A LOT depending on your DNS provider. The key is: as indicated by the previous image, you have to create one TXT record and two CNAME records in your DNS zone. The names, the types and the values are indicated by AWS SES. If you don't understand what's going on here, there is a high probability you'll need a system administrator to apply these modifications and the next ones in this article. Am I okay for AWS SES? As long as the word verified does not appear for your domain, as shown in the image below, something is wrong. Don't wait too long; if it stays pending, you have a misconfiguration somewhere.
AWS SES pending verification
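If the domain stays in the pending state, a quick way to check whether the records actually reached your DNS zone is to query them directly. This is only a rough sketch: toto.com stands in for your domain and selector1 is a placeholder, the real record names are the ones AWS SES displayed when you generated the DKIM settings.
# check the TXT verification record (name and value come from the AWS SES console)
dig +short TXT _amazonses.toto.com
# check one of the DKIM CNAME records; replace selector1 with a selector shown in the console
dig +short CNAME selector1._domainkey.toto.com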
When your domain is verified, you'll also receive an email to inform you about the successful verification. SMTP settings The last step is generating your credentials to use the AWS SES SMTP server. It is really straightforward, providing the SMTP address to use, the port, and a pair of username/password credentials.
AWS SES SMTP settings and credentials
Just click on Create My SMTP Credentials and follow the instructions. Write the SMTP server address down somewhere and store the file with the credentials on your computer; we'll need them below. Configure your server As we said before, we need a bare-metal server or a virtual machine running a recent Linux. Configure your MySQL/MariaDB database We create a user mailtrain having all rights on a new database mailtrain.
MariaDB [(none)]> create database mailtrain;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE USER 'mailtrain' IDENTIFIED BY 'V3rYD1fF1cUlTP4sSW0rd!';
Query OK, 0 rows affected (0.01 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON mailtrain.* TO 'mailtrain'@localhost IDENTIFIED BY 'V3rYD1fF1cUlTP4sSW0rd!';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mailtrain          |
| mysql              |
| performance_schema |
+--------------------+
6 rows in set (0.00 sec)
MariaDB [(none)]> Bye
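Before moving on, it can be worth confirming that the new account can actually log in and see its database; a minimal check, assuming MariaDB listens on the local socket and using the password set above:
# should print a row containing 1 if the user and grants are in place
mysql -u mailtrain -p mailtrain -e 'SELECT 1;'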
Configure your web server I use Nginx and I'll give you the complete setup for it, including generating the Let's Encrypt certificate. Configure Let's Encrypt You need to stop Nginx as root: systemctl stop nginx Then get the certificate only; I'll give the Nginx Vhost configuration below: certbot certonly -d mailtrain.toto.com Install Mailtrain On your server create the following directory: mkdir -p /var/www/
cd /var/www
wget https://github.com/Mailtrain-org/mailtrain/archive/refs/tags/v1.24.1.tar.gz
tar zxvf v1.24.1.tar.gz
mv mailtrain-1.24.1 mailtrain   # the archive unpacks to mailtrain-1.24.1; the steps below assume /var/www/mailtrain
Modify the file /var/www/mailtrain/config/production.toml to use the MySQL settings:
[mysql]
host="localhost"
user="mailtrain"
password="V3rYD1fF1cUlTP4sSW0rd!"
database="mailtrain"
Now install the dependencies (needed before the first start) and launch the Mailtrain process in a screen:
cd /var/www/mailtrain
npm install --production
screen
NODE_ENV=production npm start
Now Mailtrain is launched and should be running. Yeah, I know it's ugly to launch it like this (root process in a screen, etc.); you can improve security with the following commands:
groupadd mailtrain
useradd -g mailtrain mailtrain
chown -R mailtrain:mailtrain /var/www/mailtrain
Now create the following file in /etc/systemd/system/mailtrain.service
[Unit]
 Description=mailtrain
 After=network.target
[Service]
 Type=simple
 User=mailtrain
 WorkingDirectory=/var/www/mailtrain/
 Environment="NODE_ENV=production"
 Environment="PORT=3000"
 ExecStart=/usr/bin/npm run start
 TimeoutSec=15
 Restart=always
[Install]
 WantedBy=multi-user.target
To register the systemd unit above and launch the new Mailtrain daemon, use the following commands (do not forget to kill your screen session if you used it before):
systemctl daemon-reload
systemctl start mailtrain.service
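The commands above only start the unit for the current boot. If you want Mailtrain to come back automatically after a reboot, you may also want to enable it and check its state; this is plain systemd usage, nothing Mailtrain-specific:
systemctl enable mailtrain.service   # start automatically at boot
systemctl status mailtrain.service   # confirm the process is up and read the recent log lines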
Now Mailtrain is running under the classic user mailtrain of the mailtrain system group. Configure the Nginx Vhost configuration for your domain Here is my configuration for the Mailtrain Nginx Vhost:
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
  listen 80;
  listen [::]:80;
  server_name mailtrain.toto.com;
  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl;
  listen [::]:443 ssl;
  server_name mailtrain.toto.com;

  access_log /var/log/nginx/mailtrain.toto.com.access.log;
  error_log /var/log/nginx/mailtrain.toto.com.error.log;

  ssl_protocols TLSv1.2;
  ssl_ciphers EECDH+AESGCM:EECDH+AES;
  ssl_ecdh_curve prime256v1;
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;
  ssl_certificate     /etc/letsencrypt/live/mailtrain.toto.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/mailtrain.toto.com/privkey.pem;

  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 0;

  root /var/www/mailtrain;

  location ~ /\.well-known\/acme-challenge {
    allow all;
  }

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  add_header Strict-Transport-Security "max-age=31536000";

  location / {
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://127.0.0.1:3000;
  }
}
Now Nginx is ready. Just start it:
systemctl start nginx
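If Nginx is already serving other vhosts on this machine, a configuration test followed by a reload is the safer way to pick up the new file; again, generic Nginx usage rather than anything Mailtrain-specific:
nginx -t                  # validate the configuration before applying it
systemctl reload nginx    # apply it without dropping existing connections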
This Nginx vhost will proxy all HTTP requests to the Mailtrain process listening on port 3000. Now it's time to set up Mailtrain! Setup Mailtrain You should be able to access your Mailtrain at https://mailtrain.toto.com. Mailtrain is quite simple to configure; here is my mailer setup. Mailtrain just forwards emails to AWS SES. We only have to plug Mailtrain into AWS SES.
Mailtrain mailer setup
The hostname is provided by AWS SES in the SMTP Settings section. Use port 465 and the USE TLS option. Next, provide the AWS SES username and password you generated above and stored somewhere on your computer. One of the issues I encountered is the AWS SES rate limit. Sending too many emails too fast will get you flagged as a spammer. So I had to throttle Mailtrain. Because I'm a lazy man, I asked Pierre-Gilles Leymarie for his setup. Much easier than determining the right values myself. Here is my setup. Works fine for my soon-to-be 4k subscribers. The idea is: if AWS SES lets you know you are sending too fast, then just slow down.
Mailtrain to throttle sending emails to AWS SES
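Rather than guessing throttle values, you can look up the limits AWS actually grants your account; a hedged sketch, assuming the AWS CLI is installed and configured with credentials allowed to read SES:
# prints MaxSendRate (messages per second) and Max24HourSend for the account
aws ses get-send-quota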
Conclusion That's it! You're ready! Almost. You need an HTML template for your newsletter and a list of subscribers. But if you're not new to the newsletter field and are fleeing Mailchimp because of their expensive prices, you should have both already. After sending almost ten issues with this setup, I'm really happy with it. Open/click rates are the same. When leaving Mailchimp, do not leave any list of subscribers behind, because they'll charge you $8 for 0 to 500 contacts; that's crazy expensive! About the author The post How to save up to 500€/year switching from Mailchimp to Open Source Mailtrain and AWS SES appeared first on Carl Chenet's Blog.

18 April 2021

Russell Coker: IMA/EVM Certificates

I've been experimenting with IMA/EVM. Here is the Sourceforge page for the upstream project [1]. The aim of that project is to check hashes and maybe public key signatures on files before performing read/exec type operations on them. It can be used as the next logical step from booting a signed kernel with TPM. I am a long way from getting that sort of thing going; just getting the kernel to boot and load keys is my current challenge, and it isn't helped by the lack of documentation on error messages. This blog post started as a way of documenting the error messages so future people who google the errors can get a useful result. I am not trying to document everything, just help people get through some of the first problems. I am using Debian for my work, but some of this will apply to other distributions (particularly the kernel error messages). The Debian distribution has the ima-evm-utils package but no other support for IMA/EVM. To get this going in Debian you need to compile your own kernel with IMA support and then boot it with kernel command-line options to enable IMA; in recent kernels that includes lsm=integrity as a mandatory requirement to prevent a kernel Oops after mounting the initrd (there is already a patch to fix this). If you just want to use IMA (not get involved in development) then a good option would be to use RHEL (here is their documentation) [2] or SUSE (here is their documentation) [3]. Note that both RHEL and SUSE use older kernels, so their documentation WILL lead you astray if you try to use the latest kernel.org kernel. The Debian initrd I created a script named /etc/initramfs-tools/hooks/keys with the following contents to copy the key(s) from /etc/keys to the initrd where the kernel will load it/them. The kernel configuration determines whether x509_evm.der or x509_ima.der (or maybe both) is loaded. I haven't yet worked out which key is needed when.
#!/bin/bash
mkdir -p ${DESTDIR}/etc/keys
cp /etc/keys/* ${DESTDIR}/etc/keys
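Two small reminders that go with this, written as a sketch for a Debian system using GRUB rather than an exact recipe: initramfs hooks must be executable to be picked up, and the kernel command-line options mentioned above (including lsm=integrity) are set through the bootloader. The ima_policy=tcb value is only an example of an IMA policy option; adjust it to your kernel configuration.
chmod +x /etc/initramfs-tools/hooks/keys    # hook scripts are ignored unless executable
# in /etc/default/grub, add the options to GRUB_CMDLINE_LINUX, e.g.:
#   GRUB_CMDLINE_LINUX="... lsm=integrity ima_policy=tcb"
# then regenerate the GRUB configuration:
update-grub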
Making the Keys
#!/bin/sh
GENKEY=ima.genkey
cat << __EOF__ >$GENKEY
[ req ]
default_bits = 1024
distinguished_name = req_distinguished_name
prompt = no
string_mask = utf8only
x509_extensions = v3_usr
[ req_distinguished_name ]
O = `hostname`
CN = `whoami` signing key
emailAddress = `whoami`@`hostname`
[ v3_usr ]
basicConstraints=critical,CA:FALSE
#basicConstraints=CA:FALSE
keyUsage=digitalSignature
#keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid
#authorityKeyIdentifier=keyid,issuer
__EOF__
openssl req -new -nodes -utf8 -sha1 -days 365 -batch -config $GENKEY \
                -out csr_ima.pem -keyout privkey_ima.pem
openssl x509 -req -in csr_ima.pem -days 365 -extfile $GENKEY -extensions v3_usr \
                -CA ~/kern/linux-5.11.14/certs/signing_key.pem -CAkey ~/kern/linux-5.11.14/certs/signing_key.pem -CAcreateserial \
                -outform DER -out x509_evm.der
To get the result below I used the above script to generate a key; it is the /usr/share/doc/ima-evm-utils/examples/ima-genkey.sh script from the ima-evm-utils package, changed to use the key generated during kernel compilation for signing. You can copy the files in the certs directory from one kernel build tree to another to have the same certificate and use the same initrd configuration. After generating the key I copied x509_evm.der to /etc/keys on the target host and built the initrd before rebooting.
[    1.050321] integrity: Loading X.509 certificate: /etc/keys/x509_evm.der
[    1.092560] integrity: Loaded X.509 cert 'xev: etbe signing key: 99d4fa9051e2c178017180df5fcc6e5dbd8bb606'
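Once the key loads at boot, the next logical step is signing files so the kernel has something to appraise. As a rough illustration only (the exact workflow depends on your IMA policy), the ima-evm-utils package ships the evmctl tool, which can write a signature into the security.ima extended attribute using the private key generated above; /usr/bin/some-binary is just a placeholder path.
evmctl ima_sign --key privkey_ima.pem /usr/bin/some-binary
getfattr -m security.ima -d /usr/bin/some-binary   # show the resulting xattr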
Errors Here are some of the kernel error messages I received, along with my best interpretation of what they mean.
[    1.062031] integrity: Loading X.509 certificate: /etc/keys/x509_ima.der
[    1.063689] integrity: Problem loading X.509 certificate -74
Error -74 means -EBADMSG, which means there's something wrong with the certificate file. I have got that from /etc/keys/x509_ima.der not being in DER format, and from a DER file that contained a key pair that wasn't signed.
[    1.049170] integrity: Loading X.509 certificate: /etc/keys/x509_ima.der
[    1.093092] integrity: Problem loading X.509 certificate -126
Error -126 means -ENOKEY, so the key wasn t in the file or the key wasn t signed by the kernel signing key.
[    1.074759] integrity: Unable to open file: /etc/keys/x509_evm.der (-2)
Error -2 means -ENOENT, so the file wasn't found on the initrd. Note that it does NOT look at the root filesystem. References

8 April 2021

Thorsten Alteholz: My Debian Activities in March 2021

FTP master Things never turn out the way you expect, so this month I was only able to accept 38 packages and rejected none. Due to the freeze, the overall number of packages that got accepted was 88. Debian LTS This was my eighty-first month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my all in all workload has been 30h. During that time I did LTS and normal security uploads of: I also prepared debdiffs for unstable and/or buster for leptonlib and libebml, which for one reason or another did not result in an upload yet. Last but not least I did some days of frontdesk duties. Debian ELTS This month was the thirty-third ELTS month. During my allocated time I uploaded: Last but not least I did some days of frontdesk duties. Other stuff On my neverending golang challenge I uploaded (or sponsored for thola dependencies):
golang-github-tombuildsstuff-giovanni, golang-github-apparentlymart-go-userdirs, golang-github-apparentlymart-go-shquot, golang-github-likexian-gokit, golang-gopkg-mail.v2, golang-gopkg-redis.v5, golang-github-facette-natsort, golang-github-opentracing-contrib-go-grpc, golang-github-felixge-fgprof, golang-github-gogo-status, golang-github-leanovate-gopter, golang-github-opentracing-basictracer-go, golang-github-lightstep-lightstep-tracer-common, golang-github-go-sourcemap-sourcemap, golang-github-igm-pubsub, golang-github-igm-sockjs-go, golang-github-centrifugal-protocol, golang-github-mna-redisc, golang-github-fzambia-eagle, golang-github-centrifugal-centrifuge, golang-github-chromedp-sysutil, golang-github-client9-misspell, golang-github-knq-snaker, cdproto-gen, golang-github-mattermost-xml-roundtrip-validator, golang-github-crewjam-saml, ssllabs-scan, golang-uber-automaxprocs, golang-uber-goleak, golang-github-k0kubun-go-ansi, golang-github-schollz-progressbar, golang-github-komkom-toml, golang-github-labstack-echo, golang-github-inexio-go-monitoringplugin

7 April 2021

Emmanuel Kasper: Manually install a single node Kubernetes cluster on Debian

Debian has work-in-progress packages for Kubernetes, which work well enough for a testing and learning environment. Bootstrapping a cluster with the kubeadm deployer with these packages is not that hard, and is similar to the upstream kubeadm documentation.

Install necessary packages in a VM Install a throwaway VM with Vagrant.
apt install vagrant vagrant-libvirt
vagrant init debian/testing64
Bump the RAM and CPU of the VM, Kubernetes needs at least 2 gigs and 2 cores.
awk -i inplace '1;/^Vagrant.configure\("2"\) do \|config/{ print "  config.vm.provider :libvirt do |vm| vm.memory=2048 end" }' Vagrantfile
awk -i inplace '1;/^Vagrant.configure\("2"\) do \|config/{ print "  config.vm.provider :libvirt do |vm| vm.cpus=2 end" }' Vagrantfile
Start the VM, login, update the package index.
vagrant up
vagrant ssh
sudo apt update
Install a container engine, here we use docker.io, we could also use containerd (both are packaged in Debian) or cri-o.
sudo apt install --yes --no-install-recommends docker.io curl
Install kubernetes binaries. This will install kubelet, the system service which will manage the containers, and kubectl the user/admin tool to manage the cluster.
sudo apt install --yes kubernetes-{node,client} containernetworking-plugins
Although it is not technically mandatory, we will use kubeadm, the most popular installer to create a Kubernetes cluster. Kubeadm is not packaged in Debian, we have to download an upstream binary.
wget https://dl.k8s.io/v1.20.5/kubernetes-server-linux-amd64.tar.gz

sha512sum kubernetes-server-linux-amd64.tar.gz
28529733bf34f5d5b72eabe30a81df98cc7f8e529590f807745cd67986a2c5c3eb86cebc7ecbcfc3df3c50416306e5d150948f2483933ea46c2aebaeb871ea8f  kubernetes-server-linux-amd64.tar.gz

sudo tar --directory=/usr/local/sbin --strip-components 3 -xaf kubernetes-server-linux-amd64.tar.gz kubernetes/server/bin/kubeadm
sudo chmod +x /usr/local/sbin/kubeadm
sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Add a kubelet systemd unit:
RELEASE_VERSION="v0.4.0"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/$ RELEASE_VERSION /cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" sudo tee /etc/systemd/system/kubelet.service
sudo systemctl enable kubelet
and a default config file for kubeadm
RELEASE_VERSION="v0.4.0"
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/$ RELEASE_VERSION /cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
finally we need to help kubelet find the components needed for container networking
echo 'KUBELET_EXTRA_ARGS="--cni-bin-dir=/usr/lib/cni"' | sudo tee /etc/default/kubelet

Create a cluster Initialize a cluster with kubeadm: this will download container images for the Kubernetes control plane (= the brain of the cluster), and start the containers via the kubelet service. Yes, a good part of Kubernetes itself runs in containers.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
...
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Follow the instructions from the kubeadm output, and verify you have a single node cluster, with the status NotReady.
kubectl get nodes 
NAME STATUS ROLES AGE VERSION
testing NotReady control-plane,master 9m9s v1.20.5
At that point you should also have a bunch of containers running on the node:
sudo docker ps --format '{{.Names}}'
k8s_kube-apiserver_kube-apiserver-testing_kube-system_2711c230d39ccda1e74d1d6386a05cee_0
k8s_POD_kube-apiserver-testing_kube-system_2711c230d39ccda1e74d1d6386a05cee_0
k8s_etcd_etcd-testing_kube-system_4749b1bca3b1a73fd09c8e299d7030fe_0
k8s_POD_etcd-testing_kube-system_4749b1bca3b1a73fd09c8e299d7030fe_0
...
The kubelet service also needs an external network plugin to get the cluster in Ready state.
sudo systemctl status kubelet
...
Mar 28 09:28:43 testing kubelet[9405]: E0328 09:28:43.958059 9405 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Let s add that network plugin. Download the flannel network plugin definition, and schedule flannel to run on all nodes of your cluster:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply --filename=kube-flannel.yml
After a dozen seconds or so your node should be in Ready status.
kubectl get nodes 
NAME STATUS ROLES AGE VERSION
testing Ready control-plane,master 16m v1.20.5

Deploy a test application Our node is now in Ready status, but we cannot run applications on it, since we only have a master node, an administrative node which by default cannot run user applications.
kubectl describe node testing | grep ^Taints
Taints: node-role.kubernetes.io/master:NoSchedule
Let s allow node testing to run user applications:
kubectl taint node testing node-role.kubernetes.io/master-
Deploy a nginx container:
kubectl run my-nginx-pod --image=docker.io/library/nginx --port=80 --labels="app=http-content" 
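Before exposing it, you can check that the pod actually reached the Running state; this is plain kubectl, reusing the label set in the run command above:
kubectl get pods -l app=http-content -o wide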
Create a Kubernetes service to access this pod externally:
cat service.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-k8s-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30000
  selector:
    app: http-content

kubectl create --filename service.yaml
Access the service via its IP address:
curl 192.168.121.63:30000
...
Thank you for using nginx.

Notes I will try to turn this blog post into a Debian Wiki article, or maybe into the kubernetes-node documentation. Blog posts get outdated and disappear; wiki and project docs live longer.

4 April 2021

Russ Allbery: Review: Prince Caspian

Review: Prince Caspian, by C.S. Lewis
Illustrator: Pauline Baynes
Series: Chronicles of Narnia #2
Publisher: Collier Books
Copyright: 1951
Printing: 1979
ISBN: 0-02-044240-8
Format: Mass market
Pages: 216
Prince Caspian is the second book of the Chronicles of Narnia in the original publication order (the fourth in the new publication order) and a direct sequel to The Lion, the Witch and the Wardrobe. As much as I would like to say you could start here if you wanted less of Lewis's exploration of secondary-world Christianity and more children's adventure, I'm not sure it would be a good reading experience. Prince Caspian rests heavily on the events of The Lion, the Witch and the Wardrobe. If you haven't already, you may also want to read my review of that book for some introductory material about my past relationship with the series and why I follow the original publication order. Prince Caspian always feels like the real beginning of a re-read. Re-reading The Lion, the Witch and the Wardrobe is okay but a bit of a chore: it's very random, the business with Edmund drags on, and it's very concerned with hitting the mandatory theological notes. Prince Caspian is more similar to the following books and feels like Narnia proper. That said, I have always found the ending of Prince Caspian oddly forgettable. This re-read helped me see why: one of the worst bits of the series is in the middle of this book, and then the dramatic shape of the ending is very strange. MAJOR SPOILERS BELOW for both this book and The Lion, the Witch and the Wardrobe. Prince Caspian opens with the Pevensie kids heading to school by rail at the end of the summer holidays. They're saying their goodbyes to each other at a train station when they are first pulled and then dumped into the middle of a wood. After a bit of exploration and the discovery of a seashore, they find an overgrown and partly ruined castle. They have, of course, been pulled back into Narnia, and the castle is Cair Paravel, their great capital when they ruled as kings and queens. The twist is that it's over a thousand years later, long enough that Cair Paravel is now on an island and has been abandoned to the forest. They discover parts of how that happened when they rescue a dwarf named Trumpkin from two soldiers who are trying to drown him near the supposedly haunted woods. Most of the books in this series have good hooks, but Prince Caspian has one of the best. I adored everything about the start of this book as a kid: the initial delight at being by the sea when they were on their way to boarding school, the realization that getting food was not going to be easy, the abandoned castle, the dawning understanding of where they are, the treasure room, and the extended story about Prince Caspian, his discovery of the Old Narnia, and his flight from his usurper uncle. It becomes clear from Trumpkin's story that the children were pulled back into Narnia by Susan's horn (the best artifact in these books), but Caspian's forces were expecting the great kings and queens of legend from Narnia's Golden Age. Trumpkin is delightfully nonplussed at four school-age kids who are determined to join up with Prince Caspian and help. That's the first half of Prince Caspian, and it's a solid magical adventure story with lots of potential. The ending, alas, doesn't entirely work. And between that, we get the business with Aslan and Lucy in the woods, or as I thought of it even as a kid, the bit where Aslan is awful to everyone for no reason. 
For those who have forgotten, or who don't care about spoilers, the kids plus Trumpkin are trying to make their way to Aslan's How (formerly the Stone Table) where Prince Caspian and his forces were gathered, when they hit an unexpected deep gorge. Lucy sees Aslan and thinks he's calling for them to go up the gorge, but none of the other kids or Trumpkin can see him and only Edmund believes her. They go down instead, which almost gets them killed by archers. Then, that night, Lucy wakes up and finds Aslan again, who tells her to wake the others and follow him, but warns she may have to follow him alone if she can't convince the others to go along. She wakes them up (which does not go over well), Aslan continues to be invisible to everyone else despite being right there, Susan is particularly upset at Lucy, and everything is awful. But this time they do follow her (with lots of grumbling and over Susan's objections). This, of course, is the right decision: Aslan leads them to a hidden path that takes them over the river they're trying to cross, and becomes visible to everyone when they reach the other side. This is a mess. It made me angry as a kid, and it still makes me angry now. No one has ever had trouble seeing Aslan before, so the kids are rightfully skeptical. By intentionally deceiving them, Aslan puts the other kids in an awful position: they either have to believe Lucy is telling the truth and Aslan is being weirdly malicious, or Lucy is mistaken even though she's certain. It not only leads directly to conflict among the kids, it makes Lucy (the one who does all the right things all along) utterly miserable. It's just cruel and mean, for no purpose. It seems clear to me that this is C.S. Lewis trying to make a theological point about faith, and in a way that makes it even worse because I think he's making a different point than he intended to make. Why is religious faith necessary; why doesn't God simply make himself apparent to everyone and remove the doubt? This is one of the major problems in Christian apologetics, Lewis chooses to raise it here, and the answer he gives is that God only shows himself to his special favorites and hides from everyone else as a test. It's clearly not even a question of intention to have faith; Edmund has way more faith here than Lucy does (since Lucy doesn't need it) and still doesn't get to see Aslan properly until everyone else does. Pah. The worst part of this is that it's effectively the last we see of Susan. Prince Caspian is otherwise the book in which Susan comes into her own. The sibling relationship between the kids is great here in general, but Susan is particularly good. She is the one who takes bold action to rescue Trumpkin, risking herself by firing an arrow into the helmet of one of the soldiers despite being the most cautious of the kids. (And then gets a little defensive about her shot because she doesn't want anyone to think she would miss that badly at short range, a detail I just love.) I identified so much with her not wanting to beat Trumpkin at an archery contest because she felt bad for him (but then doing it anyway). She is, in short, awesome. I was fine with her being the most grumpy and frustrated with the argument over picking a direction. They're all kids, and sometimes one gets grumpy and frustrated and awful to the people around you. 
Once everyone sees Aslan again, Susan offers a truly excellent apology to Lucy, so it seemed like Lewis was setting up a redemption arc for her the way that he did for Edmund in The Lion, the Witch and the Wardrobe (although I maintain that nearly all of this mess was Aslan's fault). But then we never see Susan's conversation with Aslan, Peter later says he and Susan are now too old to return to Narnia, and that's it for Susan. Argh. I'll have more to say about this later (and it's not an original opinion), but the way Lewis treats Susan is the worst part of this series, and it adds insult to injury that it happens immediately after she has a chance to shine. The rest of the book suffers from the same problem that The Lion, the Witch and the Wardrobe did, namely that Aslan fixes everything in a somewhat surreal wild party and it's unclear why the kids needed to be there. (This is the book where Bacchus and Silenus show up, there is a staggering quantity of wine for a children's book, and Aslan turns a bunch of obnoxious school kids into pigs.) The kids do have more of a role to play this time: Peter and Edmund help save Caspian, and there's a (somewhat poorly motivated) duel that sends up the ending. But other than the brief battle in the How, the battle is won by Aslan waking the trees, and it's not clear why he didn't do that earlier. The ending is, at best, rushed and not worthy of its excellent setup. I was also disappointed that the "wait, why are you all kids?" moment was hand-waved away by Narnia giving the kids magical gravitas. Lewis never felt in control of either The Lion, the Witch and the Wardrobe or Prince Caspian. In both cases, he had a great hook and some ideas of what he wanted to hit along the way, but the endings are more sense of wonder and random Aslan set pieces than anything that follows naturally from the setup. This is part of why I'm not commenting too much on the sour notes, such as the red dwarves being the good and loyal ones but the black dwarves being suspicious and only out for themselves. If I thought bits like that were deliberate, I'd complain more, but instead it feels like Lewis threw random things he liked about children's books and animal stories into the book and gave it a good stir, and some of his subconscious prejudices fell into the story along the way. That said, resolving your civil war children's book by gathering all the people who hate talking animals (but who have lived in Narnia for generations) and exiling them through a magical gateway to a conveniently uninhabited country is certainly a choice, particularly when you wrote the book only two years after the Partition of India. Good lord. Prince Caspian is a much better book than The Lion, the Witch and the Wardrobe for the first half, and then it mostly falls apart. The first half is so good, though. I want to read the book that this could have become, but I'm not sure anyone else writes quite like Lewis at his best. Followed by The Voyage of the Dawn Treader, which is my absolute favorite of the series. Rating: 7 out of 10

31 January 2021

John Goerzen: The Hidden Drawbacks of P2P (And a Defense of Signal)

Not long ago, I posted a roundup of secure messengers with off-the-grid capabilities. Some conversation followed, which led me to consider some of the problems with P2P protocols. P2P and Privacy Brave adopting IPFS has driven a lot of buzz lately. IPFS is essentially a decentralized, distributed web. This concept has a lot of promise. But take a look at the IPFS privacy document. Some things to highlight: So in this case, you have traded giving information about what you request to specific sites for giving it to potentially hundreds of untrusted peers, some of which may be logging this for nefarious purposes. Worse, you have a durable PeerID that can be used for tracking and tied to your IP address: a data collector's dream. This PeerID, combined with DHT requests and the CIDs (Content IDs) of the things you host (implying you viewed them in the past), can be used to establish a picture of what you are requesting now and requested recently. Similar can be said of everything from Scuttlebutt to GNU Jami; any service that operates on a P2P basis will likely reveal your IP, and tie your identity to it (and your IP address history). In some cases, as with Jami, this would be limited to friends you add; in others, as with Scuttlebutt and IPFS, it could be revealed to anyone. The advantages of P2P are undeniable and profound, but few are effectively addressing the privacy implications. The one I know of that is, Briar, routes all traffic over Tor; every node is reached by a Tor onion service. Federation: somewhat better In a federated model, every client connects to a server, and there are many servers participating in a federation with each other. Matrix and Mastodon are examples of a federated model. In this scenario, only one server, your own homeserver, can track you by IP. End-to-end encryption is certainly possible in a federated model, and Matrix supports it. This does give a third party (the specific server you use) knowledge of your IP, but that knowledge can be significantly limited. A downside of this approach is that if your particular homeserver is down, you are unable to communicate. Truly decentralized P2P solutions don't have that problem, though they do have a related one, which is that clients communicating with each other must both be online simultaneously in order for messages to be transmitted, and this can be a real challenge for mobile devices. Centralization and Signal Signal is centralized; it has one central server farm, and if it is down, you can't communicate, and you can't choose any other server, either. We saw it go down recently after Elon Musk mentioned it. Still, I recommend Signal for the general public. Here's why. Signal brings encryption and privacy to meet people where they're at, not the other way around. People don't have to choose a server, it can automatically recognize contacts that use Signal, it has emojis, attachments, secure voice and video calling, and (aside from the Musk incident), it all just works. It feels like, and is, a polished, modern experience with the bells and whistles people are used to. I'm a huge fan of Matrix (aka Element) and even run my own instance. It has huge promise. But it is Not. There. Yet. Why do I say this about Matrix? Again, I love Matrix. I use it every day to interact with Matrix, IRC, Slack, and Discord channels. It has a ton of promise. But would I count on it to carry a "my car's broken down and I'm stranded" message? No. How about some of the other options out there? I mentioned Briar above.
It's fantastic and its offline options are novel and promising. But in common usage, it can't deliver a message unless both devices are online simultaneously, and it doesn't run on iOS (though both are being worked on). It also can't send photos or do voice or video calling. Some of these same limitations apply to most of the other Signal alternatives as well: either that, or they are encryption-optional, or terribly hard to set up and use. I recently mentioned Status, which shows a ton of promise, but has no voice or video calling capabilities. Scuttlebutt is a fantastic protocol with extremely difficult onboarding (lengthy process, error-prone finding of a pub, multi-GB initial download, etc.). And many of these leak IP addresses as discussed above. So Signal gives people: If you are going to tell someone, "it's so EASY to get your texts away from Facebook and AT&T", then Signal is the thing you've got to point them to. It may not be in two years, but for now, it is. Do not let the perfect be the enemy of the good. It advances the status quo without harming usability, which nothing else does yet. I am aware of all of the very legitimate criticisms of Signal. They are real and they are why I am excited that there are so many alternatives with promise, some of which I use actively. Let us technical people use, debug, contribute to, and evangelize the alternatives. And while we're doing that, tell Grandma to contact us on Signal.

4 January 2021

Iustin Pop: Year 2020 review

Year 2020. What a year! Sure, already around early January there were rumours/noise about Covid-19, but who would have thought where it would end up! Thankfully, none of my close or extended family was directly (medically) affected by Covid, so I/we had a privileged year compared to so many other people. I thought about how to write a mini-summary, but prose is too difficult, so let's just go month by month. Please note that my memory is fuzzy after 9 months cooped up in the apartment, so things could be off by a month compared to what I wrote.

Timeline

January Ski weekend. Skiing is awesome! Cancelling a US work trip since there will be more opportunities soon (har har!).

February Ski vacation. Yep, skiing is awesome. Can't wait for next season (har har!). Discussions about Covid start in the office, but more along the lines of "is this scary or just interesting?" (yes, this was before casualties). Then things start escalating, work-from-home at least partially, etc. etc. Definitely not just interesting anymore. In Garmin-speak, I got ~700+ intensity minutes in February (correlates with activity time, but depends on the intensity of the effort, whether 1:1 or 2 intensity minutes for one wall-clock minute).

March Sometime during the month, my workplace introduces mandatory WFH. I remember being the last person from our team in the office, on the last day we were allowed to work there, cleaning my desk etc., thinking "all this, and we'll be back in 3 weeks or so". Har har! I buy a webcam, just in case WFH gets extended. And I start to increase my sports, getting double the intensity minutes (1500+).

April Switzerland enters the first, hard lockdown. Or was it late March? Not entirely sure, but in my mind March was the opening, and April was the first main course. It is challenging, having to juggle family and work and a stressed schedule, but also interesting. Looking back, I think I liked April the most, as people were actually careful at that time. I continue upgrading my home office: a new sound system, so that I don't have to plug cables in and out. 1700+ intensity minutes this month.

May Continued WFH, somewhat routine now. My then internet provider started sucking hard, so I upgraded, with good results. I'm still happy, half a year later (quite happy, even). Still going strong otherwise, but waiting for summer vacation, whatever it will be. A tiny bit more effort, so 1800 intensity minutes in May.

June Switzerland relaxes the lockdown, but not my company, so as the rest of the family goes out and about, I start feeling alone in the apartment. And somewhat angry at it, which (counter-intuitively) impacts my sports, so I only get 1500 intensity minutes. I go and buy a coffee machine, a real one that takes beans and grinds them, so I get to enjoy the smell of freshly ground coffee and the fun of learning about coffee beans, etc. But it occupies the time. On the work/job front, I think at this time I finally got a workstation for home, instead of a laptop (which was ultra-portable too), so together with the coffee machine, it feels like a normal work environment. Well, modulo all the people. At least I'm not crying anymore every time I open a new tab in Chrome.

July The situation is slowly getting better, but no, not at my company. Still mandatory WFH, with (if I recall correctly) one day per week allowed, and no meeting other people. I get angrier, but manage to channel my energy into sports, almost doubling my efforts in July: 2937 intensity minutes, not quite reaching the magic 3000 number. I buy more stuff to clean and take care of my bicycles, which I don't really use. So shopping therapy too.

August The month starts with a one-week family vacation, but I take a bike too, so I manage to put in some effort (it was quite nice riding, TBH). A bit of change in my personal life (nothing unexpected), which complicates things a bit, but at this moment I really thought Switzerland was going to continue to decrease in infections/R-factor/etc., so things would get back to normal, right? My company expands a bit the work-from-office part, so I'm optimistic. Sports-wise, still going strong, 2500 intensity minutes, preparing for the single race this year.

September The personal life changes from August start to stabilise, so things become routine again, and I finally get to do a race. Life was good for an extended weekend (well, modulo race angst, but that's part of the fun), and I feel justified in taking it slow the week after the race. And the week after that too. I end the month with close to, but not quite, 1900 intensity minutes.

October October starts with school holidays and a one-week family vacation, but I feel demotivated. Everything is closing down again (well, modulo schools), and I actually have difficulty getting re-adjusted to no longer being alone in the apartment during work hours. I only get ~1000 intensity minutes in October, mainly thanks to good late-autumn weather and outside rides. And I start playing way more computer games. I also sell my PS4, hoping to get a PS5 next month.

November November continues to suck. I think my vacation in October was actually detrimental: it broke my rhythm, and I don't really do sport anymore, not consistently at least, so I only get 700+ intensity minutes. And I keep playing computer games, even though I missed the PS5 ordering window; so I switch to PC gaming. My home office feels very crowded, so as a kind of anti-shopping therapy, I sell tons of smallish stuff; I can't believe how much crap I kept around while not really using it. I also manage to update/refresh all my Debian packages, since the next freeze approaches. Better than for previous releases, so it feels good.

December December comes, the end of the year, the much-awaited vacation, which we decide to cancel due to the situation in the whole of Switzerland (and neighbouring countries). I basically only play computer games, and get a grand total of 345 activity minutes this month. And since my weight is inversely correlated with my training, I'm basically back at my February weight, having lost all the gains I made during the year. I mean, having gained back all the fat I lost. Err, you know what I mean; I'm back close to my high-water mark, which is not good.

Conclusion I was somehow hoping that the end of the year would allow me to reset and restart, but somehow, a few days into January, it doesn't really feel so. My sleep schedule is totally ruined, my motivation is so-so, and I think the way I crashed in October was much harder/worse than I realised at the time, but in a way expected for this crazy year. I have some projects for 2021, or at least I'm trying to make up a project list, in order to get a bit more structure into my continued stuck-inside-the-house part, which is especially terrible when on-call. I don't know how the next 3-6 months will evolve, but I'm thankful that so far, we are all healthy. Actually, me personally, I've been healthier physically than in other years, due to less contact with other people. On the other side, thinking of all the health-care workers, or even service workers, my IT job is comfy and all I am is a spoiled person (I could write many posts on specifically this topic). I really need to up my willpower and lower my spoil level. Hints are welcome :( Wishing everybody a better year in 2021.

25 December 2020

Niels Thykier: Improvements to IntelliJ/PyCharm support for Debian packaging files

I have updated my debpkg plugin for IDEA (e.g. IntelliJ, PyCharm, Android Studio) to v0.0.8. Here are some of the changes since last time I wrote about the plugin. New file types supported Links for URLs and bug closes There are often links in deb822 files or the debian/changelog, and as of v0.0.8 the plugin will now highlight them and enable you to easily open them via your browser. In the deb822 case, they generally appear in the Homepage field, the Vcs-* fields or the Format field of the debian/copyright file. For the changelog file, they often appear in the form of bug Closes statements such as the #123456 in "Closes: #123456", which is a reference to https://bugs.debian.org/123456. Improvements to debian/control The dependency validator now has per-field knowledge. This enables it to flag dependency relations in the Provides field that use operators other than = (which is the only operator supported in that field). It also knows which fields support build-profile restrictions. It could in theory also do Architecture restrictions, but I have not added that, among other reasons because it gets a bit spicy around binary packages. (Fun fact: you can have Depends: foo [amd64], but only for arch:any packages.) The plugin now suggests adding a Rules-Requires-Root field to the Source stanza, along with a quick fix for adding the field. Admittedly, it was mostly done as an exercise for me to learn how to do that kind of feature. Support for machine-readable debian/copyright The plugin now has a dedicated file type for debian/copyright files that follow the machine-readable format. It should auto-detect it based on the presence of the Format field being set to https://www.debian.org/doc/packaging-manuals/copyright-format/1.0. Sadly, I have not found the detection reliable in all cases, so you may have to apply it manually. With the copyright format, the plugin now scans the Files fields for common issues like pointing at non-existing paths and invalid escape sequences. When the plugin discovers a path that does not match anything, it highlights the part of the path that cannot be found. As an example, consider the pattern src/foo/data.c where src/foo exists but data.c does not; the plugin will then only flag the data.c part of src/foo/data.c as invalid. The plugin will also suggest a quick fix if you put a directory into the Files field, to replace it with a directory wildcard (e.g. src/foo -> src/foo/*), which is how the spec wants you to reference every file beneath a given directory. Finally, when the plugin can identify part of the path, it will turn it into a link (a reference in IDEA lingo). This means that you can CTRL + click on it to jump to the file. As a side effect, it also provides refactoring assistance for renaming files, where renaming a file will often be automatically reflected in debian/copyright. This use case is admittedly mostly relevant to people who are both the upstream and the downstream maintainer. Folding support improvement for .dsc/.changes/.buildinfo files The new file types came with two cases where I decided to improve the folding support logic. The first was the GPG signature (if present), which consists of two parts: the top part, which is mostly a single-line marker but often followed by a GPG armor header (e.g. Hash: SHA512), and then the signature blob with related marker lines around it. Both cases are folded into a single marker line by default to reduce their impact on content in the editor view. The second case was the following special-case pattern:
Files:
 <md5> <size> filename
Checksums-Sha256:
 <sha256> <size> filename
In the above example, where there is exactly one file name, those fields will by default now be folded into:
Files: <md5> <size> filename
Checksums-Sha256: <sha256> <size> filename
For all other multi-line fields, the plugin still falls back to a list of known fields to fold by default, as in previous versions. Spellchecking improvements The plugin already supported selective spell checking in v0.0.3, where it often omitted spell checking for fields (in deb822 files) where it did not make sense. The spell check feature has been improved by providing a list of known packaging terms/jargon used by many contributors (so autopkgtests is no longer considered a typo). This applies to all file types (probably also those not handled by the plugin, as it is just a dictionary). Furthermore, the plugin also attempts to discover common patterns (e.g. file names or command arguments) and exempt these from spell checking in the debian/changelog. This also includes manpage references such as foo.1 or foo(1). It is far from perfect and relies on common patterns to exclude spell checking. Nonetheless, it should reduce the number of false positives considerably. Feedback welcome Please let me know if you run into bugs or would like a particular feature implemented. You can submit bug reports and feature requests in the issue tracker on github.

8 December 2020

François Marier: Opting your domain out of programmatic advertising

A few years ago, the advertising industry introduced the ads.txt project in order to defend against widespread domain spoofing vulnerabilities in programmatic advertising. I decided to use this technology to opt out of having ads sold for my domains, at least through ad exchanges which perform this check, by hosting a text file containing this:
contact=ads@fmarier.org
at the following locations: (In order to get this to work on my blog, running Ikiwiki on Branchable, I had to disable the txt plugin in order to get ads.txt to be served as a plain text file instead of being automatically rendered as HTML.)
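If you want to double-check that your web server actually serves the file as plain text rather than rendering it as HTML, one quick sanity check (substitute your own domain for example.com) is to look at the response headers:
curl -sI https://example.com/ads.txt | grep -i -E '^(HTTP|content-type)'
The Content-Type should be text/plain, or at the very least not text/html.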

Specification

The key parts of the specification for our purposes are:
[3.1] If the server response indicates the resource does not exist (HTTP Status Code 404), the advertising system can assume no declarations exist and that no advertising system is unauthorized to buy and sell ads on the website.
[3.2.1] Some publishers may choose to not authorize any advertising system by publishing an empty ads.txt file, indicating that no advertising system is authorized to buy and sell ads on the website. So that consuming systems properly read and interpret the empty file (differentiating between web servers returning error pages for the /ads.txt URL), at least one properly formatted line must be included which adheres to the format specification described above.
As you can see, the specification sadly ignores RFC8615 and requires that the ads.txt file be present directly in the root of your web server, like the venerable robots.txt file, but unlike the newer security.txt standard. If you don't want to provide an email address in your ads.txt file, the specification recommends using the following line verbatim:
placeholder.example.com, placeholder, DIRECT, placeholder
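For comparison, an entry that does authorize a seller follows the format: advertising system domain, publisher account ID, DIRECT or RESELLER, and an optional certification authority ID. The values below are invented purely to illustrate that format:
adexchange.example, pub-1234567890, DIRECT, abcdef123456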

Validation

A number of online validators exist, but I used the following to double-check my setup:

26 October 2020

Marco d'Itri: RPKI validation with FORT Validator

This article documents how to install FORT Validator (an RPKI relying party software which also implements the RPKI to Router protocol in a single daemon) on Debian 10 to provide RPKI validation to routers. If you are using testing or unstable then you can just skip the part about apt pinnings. The packages in bullseye (Debian testing) can be installed as is on Debian stable with no need to rebuild them, by configuring an appropriate pinning for apt:
cat <<END > /etc/apt/sources.list.d/bullseye.list
deb http://deb.debian.org/debian/ bullseye main
END
cat <<END > /etc/apt/preferences.d/pin-rpki
# by default do not install anything from bullseye
Package: *
Pin: release bullseye
Pin-Priority: 100
Package: fort-validator rpki-trust-anchors
Pin: release bullseye
Pin-Priority: 990
END
apt update
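Before installing anything, you can confirm that the pinning behaves as intended, i.e. that the candidate versions of the two RPKI packages come from bullseye while everything else stays on stable:
apt-cache policy fort-validator rpki-trust-anchors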
Before starting, make sure that curl (or wget) and the web PKI certificates are installed:
apt install curl ca-certificates
If you already know about the legal issues related to the ARIN TAL then you may instruct the package to automatically install it. If you skip this step then you will be asked at installation time about it, either way is fine.
echo 'rpki-trust-anchors rpki-trust-anchors/get_arin_tal boolean true' \
    | debconf-set-selections
Install the package as usual:
apt install fort-validator
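As a quick sanity check, assuming the packaged daemon listens on the default RTR port 323 (adjust the port if your FORT configuration differs), you can verify that something is listening on it:
ss -ltn | grep -w 323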
You may also install rpki-client and gortr on Debian 10, or maybe cfrpki and gortr. I have also tried packaging Routinator 3000 for Debian, but this effort is currently on hold because the Rust ecosystem is broken and hostile to the good packaging practices of Linux distributions.

Marco d'Itri: RPKI validation with OpenBSD's rpki-client and Cloudflare's gortr

This article documents how to install rpki-client (an RPKI relying party software, the actual validator) and gortr (which implements the RPKI to Router protocol) on Debian 10 to provide RPKI validation to routers. If you are using testing or unstable then you can just skip the part about apt pinnings. The packages in bullseye (Debian testing) can be installed as is on Debian stable with no need to rebuild them, by configuring an appropriate pinning for apt:
cat <<END > /etc/apt/sources.list.d/bullseye.list
deb http://deb.debian.org/debian/ bullseye main
END
cat <<END > /etc/apt/preferences.d/pin-rpki
# by default do not install anything from bullseye
Package: *
Pin: release bullseye
Pin-Priority: 100
Package: gortr rpki-client rpki-trust-anchors
Pin: release bullseye
Pin-Priority: 990
END
apt update
Before starting, make sure that curl (or wget) and the web PKI certificates are installed:
apt install curl ca-certificates
If you already know about the legal issues related to the ARIN TAL then you may instruct the package to automatically install it. If you skip this step then you will be asked at installation time about it, either way is fine.
echo 'rpki-trust-anchors rpki-trust-anchors/get_arin_tal boolean true' \
    | debconf-set-selections
Install the packages as usual:
apt install rpki-client gortr
And then configure rpki-client to generate its output in the JSON format needed by gortr:
echo 'OPTIONS=-j' > /etc/default/rpki-client
You may manually start the service unit to immediately generate the data instead of waiting for the next timer run:
systemctl start rpki-client &
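The initial validation run can take a while. Assuming the unit is simply named rpki-client, you can follow its progress in the journal, and once it finishes the JSON file that gortr will consume (the same path configured in the next step) should exist:
journalctl -u rpki-client
ls -lh /var/lib/rpki-client/json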
gortr too needs to be configured to use the JSON data generated by rpki-client:
echo 'GORTR_ARGS=-bind :323 -verify=false -checktime=false -cache /var/lib/rpki-client/json' > /etc/default/gortr
And then it needs to be restarted to use the new configuration:
systemctl restart gortr
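To check that gortr picked up the new configuration and is serving the RPKI-to-Router protocol on the port given in GORTR_ARGS above, verify that it is listening on TCP port 323:
ss -ltnp | grep -w 323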
You may also install FORT Validator on Debian 10, or maybe cfrpki with gortr. I have also tried packaging Routinator 3000 for Debian, but this effort is currently on hold because the Rust ecosystem is broken and hostile to the packaging practices of Linux distributions.
